Oct 04 04:45:56 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 04 04:45:56 crc restorecon[4740]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc 
restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc 
restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 
04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc 
restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc 
restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 
crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 
04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc 
restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:56 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc 
restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc 
restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 
04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 04 04:45:57 crc restorecon[4740]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc 
restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 04 04:45:57 crc restorecon[4740]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 04 04:45:58 crc kubenswrapper[4802]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 04 04:45:58 crc kubenswrapper[4802]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 04 04:45:58 crc kubenswrapper[4802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 04 04:45:58 crc kubenswrapper[4802]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 04 04:45:58 crc kubenswrapper[4802]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 04 04:45:58 crc kubenswrapper[4802]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.116025 4802 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122445 4802 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122485 4802 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122495 4802 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122503 4802 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122508 4802 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122514 4802 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122520 4802 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122527 4802 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122534 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 
04:45:58.122541 4802 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122546 4802 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122551 4802 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122557 4802 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122562 4802 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122567 4802 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122573 4802 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122578 4802 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122583 4802 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122587 4802 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122592 4802 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122597 4802 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122602 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122606 4802 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122611 4802 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 04 
04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122616 4802 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122620 4802 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122625 4802 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122629 4802 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122634 4802 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122657 4802 feature_gate.go:330] unrecognized feature gate: Example Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122663 4802 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122668 4802 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122673 4802 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122678 4802 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122683 4802 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122688 4802 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122694 4802 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122701 4802 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122706 4802 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122712 4802 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122718 4802 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122726 4802 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122732 4802 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122737 4802 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122743 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122749 4802 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122756 4802 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122761 4802 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122768 4802 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122774 4802 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122780 4802 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 04 04:45:58 crc 
kubenswrapper[4802]: W1004 04:45:58.122785 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122792 4802 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122797 4802 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122802 4802 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122821 4802 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122827 4802 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122832 4802 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122837 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122842 4802 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122849 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122854 4802 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122859 4802 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122865 4802 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122871 4802 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122902 4802 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122908 4802 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122913 4802 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122918 4802 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122923 4802 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.122927 4802 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123574 4802 flags.go:64] FLAG: --address="0.0.0.0" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123590 4802 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123601 4802 flags.go:64] FLAG: --anonymous-auth="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123610 4802 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123617 4802 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123623 4802 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123631 4802 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123656 4802 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123663 
4802 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123669 4802 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123675 4802 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123681 4802 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123688 4802 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123693 4802 flags.go:64] FLAG: --cgroup-root="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123699 4802 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123704 4802 flags.go:64] FLAG: --client-ca-file="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123710 4802 flags.go:64] FLAG: --cloud-config="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123715 4802 flags.go:64] FLAG: --cloud-provider="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123720 4802 flags.go:64] FLAG: --cluster-dns="[]" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123729 4802 flags.go:64] FLAG: --cluster-domain="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123734 4802 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123740 4802 flags.go:64] FLAG: --config-dir="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123746 4802 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123752 4802 flags.go:64] FLAG: --container-log-max-files="5" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123759 4802 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 
04:45:58.123765 4802 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123772 4802 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123778 4802 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123784 4802 flags.go:64] FLAG: --contention-profiling="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123790 4802 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123798 4802 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123804 4802 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123810 4802 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123818 4802 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123824 4802 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123830 4802 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123836 4802 flags.go:64] FLAG: --enable-load-reader="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123842 4802 flags.go:64] FLAG: --enable-server="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123847 4802 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123854 4802 flags.go:64] FLAG: --event-burst="100" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123860 4802 flags.go:64] FLAG: --event-qps="50" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123866 4802 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 04 
04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123872 4802 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123878 4802 flags.go:64] FLAG: --eviction-hard="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123886 4802 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123892 4802 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123898 4802 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123904 4802 flags.go:64] FLAG: --eviction-soft="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123910 4802 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123915 4802 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123921 4802 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123927 4802 flags.go:64] FLAG: --experimental-mounter-path="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123933 4802 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123938 4802 flags.go:64] FLAG: --fail-swap-on="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123944 4802 flags.go:64] FLAG: --feature-gates="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123951 4802 flags.go:64] FLAG: --file-check-frequency="20s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123957 4802 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123962 4802 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123968 4802 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123974 4802 flags.go:64] FLAG: --healthz-port="10248" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123980 4802 flags.go:64] FLAG: --help="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123987 4802 flags.go:64] FLAG: --hostname-override="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123993 4802 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.123999 4802 flags.go:64] FLAG: --http-check-frequency="20s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124004 4802 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124010 4802 flags.go:64] FLAG: --image-credential-provider-config="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124017 4802 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124024 4802 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124030 4802 flags.go:64] FLAG: --image-service-endpoint="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124036 4802 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124042 4802 flags.go:64] FLAG: --kube-api-burst="100" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124048 4802 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124054 4802 flags.go:64] FLAG: --kube-api-qps="50" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124059 4802 flags.go:64] FLAG: --kube-reserved="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124066 4802 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124071 4802 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124076 4802 flags.go:64] FLAG: --kubelet-cgroups="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124082 4802 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124088 4802 flags.go:64] FLAG: --lock-file="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124093 4802 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124099 4802 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124105 4802 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124113 4802 flags.go:64] FLAG: --log-json-split-stream="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124119 4802 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124125 4802 flags.go:64] FLAG: --log-text-split-stream="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124131 4802 flags.go:64] FLAG: --logging-format="text" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124136 4802 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124143 4802 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124148 4802 flags.go:64] FLAG: --manifest-url="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124154 4802 flags.go:64] FLAG: --manifest-url-header="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124161 4802 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124167 4802 flags.go:64] FLAG: --max-open-files="1000000" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124174 4802 
flags.go:64] FLAG: --max-pods="110" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124180 4802 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124186 4802 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124192 4802 flags.go:64] FLAG: --memory-manager-policy="None" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124198 4802 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124204 4802 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124210 4802 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124216 4802 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124233 4802 flags.go:64] FLAG: --node-status-max-images="50" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124239 4802 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124245 4802 flags.go:64] FLAG: --oom-score-adj="-999" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124251 4802 flags.go:64] FLAG: --pod-cidr="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124257 4802 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124266 4802 flags.go:64] FLAG: --pod-manifest-path="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124272 4802 flags.go:64] FLAG: --pod-max-pids="-1" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124277 4802 flags.go:64] FLAG: --pods-per-core="0" Oct 04 04:45:58 
crc kubenswrapper[4802]: I1004 04:45:58.124283 4802 flags.go:64] FLAG: --port="10250" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124289 4802 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124294 4802 flags.go:64] FLAG: --provider-id="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124299 4802 flags.go:64] FLAG: --qos-reserved="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124305 4802 flags.go:64] FLAG: --read-only-port="10255" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124311 4802 flags.go:64] FLAG: --register-node="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124317 4802 flags.go:64] FLAG: --register-schedulable="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124322 4802 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124332 4802 flags.go:64] FLAG: --registry-burst="10" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124338 4802 flags.go:64] FLAG: --registry-qps="5" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124343 4802 flags.go:64] FLAG: --reserved-cpus="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124349 4802 flags.go:64] FLAG: --reserved-memory="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124357 4802 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124363 4802 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124369 4802 flags.go:64] FLAG: --rotate-certificates="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124374 4802 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124380 4802 flags.go:64] FLAG: --runonce="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124386 4802 
flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124392 4802 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124398 4802 flags.go:64] FLAG: --seccomp-default="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124404 4802 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124409 4802 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124416 4802 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124422 4802 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124429 4802 flags.go:64] FLAG: --storage-driver-password="root" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124434 4802 flags.go:64] FLAG: --storage-driver-secure="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124440 4802 flags.go:64] FLAG: --storage-driver-table="stats" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124446 4802 flags.go:64] FLAG: --storage-driver-user="root" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124451 4802 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124457 4802 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124464 4802 flags.go:64] FLAG: --system-cgroups="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124469 4802 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124478 4802 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124484 4802 flags.go:64] FLAG: --tls-cert-file="" Oct 04 
04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124489 4802 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124496 4802 flags.go:64] FLAG: --tls-min-version="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124502 4802 flags.go:64] FLAG: --tls-private-key-file="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124507 4802 flags.go:64] FLAG: --topology-manager-policy="none" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124513 4802 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124519 4802 flags.go:64] FLAG: --topology-manager-scope="container" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124525 4802 flags.go:64] FLAG: --v="2" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124532 4802 flags.go:64] FLAG: --version="false" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124539 4802 flags.go:64] FLAG: --vmodule="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124547 4802 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.124552 4802 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124702 4802 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124710 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124715 4802 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124721 4802 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124726 4802 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 
04:45:58.124732 4802 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124737 4802 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124742 4802 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124747 4802 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124752 4802 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124757 4802 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124761 4802 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124766 4802 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124771 4802 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124776 4802 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124780 4802 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124785 4802 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124790 4802 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124794 4802 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124799 4802 feature_gate.go:330] unrecognized feature 
gate: UpgradeStatus Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124804 4802 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124809 4802 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124814 4802 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124819 4802 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124825 4802 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124832 4802 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124837 4802 feature_gate.go:330] unrecognized feature gate: Example Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124842 4802 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124853 4802 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124858 4802 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124863 4802 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124868 4802 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124873 4802 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124878 4802 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 04 
04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124883 4802 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124887 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124892 4802 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124897 4802 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124902 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124907 4802 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124912 4802 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124917 4802 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124921 4802 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124926 4802 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124931 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124936 4802 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124940 4802 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124945 4802 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124950 4802 feature_gate.go:330] 
unrecognized feature gate: SetEIPForNLBIngressController Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124955 4802 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124961 4802 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124967 4802 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124971 4802 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124977 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124981 4802 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124986 4802 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124992 4802 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.124997 4802 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125002 4802 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125008 4802 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125016 4802 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125021 4802 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125026 4802 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125030 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125035 4802 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125040 4802 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125046 4802 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125053 4802 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125059 4802 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125069 4802 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.125074 4802 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.125091 4802 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.139608 4802 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.139683 4802 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139809 4802 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139824 4802 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139833 4802 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139840 4802 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139847 4802 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139853 4802 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139859 4802 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139866 4802 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139873 4802 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139878 4802 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139884 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139889 4802 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139895 4802 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139900 4802 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139906 4802 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 04 04:45:58 crc 
kubenswrapper[4802]: W1004 04:45:58.139911 4802 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139917 4802 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139922 4802 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139927 4802 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139933 4802 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139938 4802 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139944 4802 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139950 4802 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139956 4802 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139962 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139968 4802 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139973 4802 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139978 4802 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139984 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139989 4802 feature_gate.go:330] 
unrecognized feature gate: BareMetalLoadBalancer Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.139996 4802 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140002 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140008 4802 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140013 4802 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140019 4802 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140024 4802 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140029 4802 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140037 4802 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140042 4802 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140050 4802 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140057 4802 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140063 4802 feature_gate.go:330] unrecognized feature gate: Example Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140068 4802 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140074 4802 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140081 4802 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140087 4802 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140095 4802 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140101 4802 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140107 4802 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140112 4802 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140118 4802 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140123 4802 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140128 4802 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140134 4802 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140142 4802 feature_gate.go:353] Setting 
GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140149 4802 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140155 4802 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140163 4802 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140169 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140176 4802 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140181 4802 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140188 4802 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140195 4802 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140201 4802 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140207 4802 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140214 4802 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140221 4802 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140227 4802 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140233 4802 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 04 
04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140271 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140281 4802 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.140291 4802 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140495 4802 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140509 4802 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140519 4802 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140527 4802 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140534 4802 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140542 4802 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140549 4802 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140557 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140563 4802 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140571 4802 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140577 4802 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140586 4802 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140594 4802 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140602 4802 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140609 4802 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140616 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140622 4802 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140628 4802 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140634 4802 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140658 4802 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140664 4802 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140670 4802 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140677 4802 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140683 4802 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140688 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140694 4802 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 
04:45:58.140707 4802 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140713 4802 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140719 4802 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140725 4802 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140734 4802 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140743 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140749 4802 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140756 4802 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140762 4802 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140768 4802 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140775 4802 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140782 4802 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140789 4802 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140795 4802 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140801 4802 feature_gate.go:330] 
unrecognized feature gate: PinnedImages Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140807 4802 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140813 4802 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140818 4802 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140825 4802 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140830 4802 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140836 4802 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140842 4802 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140847 4802 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140853 4802 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140858 4802 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140864 4802 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140869 4802 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140876 4802 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140884 4802 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140891 4802 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140896 4802 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140902 4802 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140907 4802 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140914 4802 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140919 4802 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140925 4802 feature_gate.go:330] unrecognized feature gate: Example Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140931 4802 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140937 4802 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140942 4802 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140947 4802 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140953 4802 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140958 4802 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140963 4802 feature_gate.go:330] 
unrecognized feature gate: VolumeGroupSnapshot Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140969 4802 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.140974 4802 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.140983 4802 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.142181 4802 server.go:940] "Client rotation is on, will bootstrap in background" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.147330 4802 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.147456 4802 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.149558 4802 server.go:997] "Starting client certificate rotation" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.149589 4802 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.149933 4802 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-18 09:11:57.775951956 +0000 UTC Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.150103 4802 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1084h25m59.625857283s for next certificate rotation Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.179812 4802 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.181712 4802 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.196369 4802 log.go:25] "Validated CRI v1 runtime API" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.229504 4802 log.go:25] "Validated CRI v1 image API" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.231747 4802 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.240604 4802 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-04-04-40-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.240669 4802 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.257050 4802 manager.go:217] Machine: {Timestamp:2025-10-04 04:45:58.253093832 +0000 UTC m=+0.661094477 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:827fad0a-2530-4d29-b9e6-eca7ec571a16 BootID:c452f803-794d-4c12-9ed0-ead681c77619 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:19:c9:fd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:19:c9:fd Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2f:36:30 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a3:fc:a9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:19:31:c6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:59:18:fb Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:4d:9c:ee Speed:-1 Mtu:1496} {Name:eth10 MacAddress:be:f5:b6:09:34:e8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9e:e4:11:32:0a:e8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] 
SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.257939 4802 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.258243 4802 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.258981 4802 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.259277 4802 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.259325 4802 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.259814 4802 topology_manager.go:138] "Creating topology manager with none policy"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.259836 4802 container_manager_linux.go:303] "Creating device plugin manager"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.260613 4802 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.260707 4802 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.261605 4802 state_mem.go:36] "Initialized new in-memory state store"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.262423 4802 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.267967 4802 kubelet.go:418] "Attempting to sync node with API server"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.268016 4802 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.268082 4802 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.268105 4802 kubelet.go:324] "Adding apiserver pod source"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.268143 4802 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.276537 4802 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.277899 4802 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.278545 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.278617 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError"
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.278863 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.278985 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.280538 4802 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.285248 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.285754 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.285875 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.285979 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.286088 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.286188 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.286347 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.286478 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.286586 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.286759 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.286917 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.287023 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.288449 4802 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.289393 4802 server.go:1280] "Started kubelet"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.290071 4802 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.290113 4802 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.290944 4802 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.291145 4802 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 04 04:45:58 crc systemd[1]: Started Kubernetes Kubelet.
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.293356 4802 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.293387 4802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.293491 4802 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.293520 4802 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.293605 4802 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.293670 4802 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.293782 4802 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:15:04.981758073 +0000 UTC
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.294030 4802 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1453h29m6.6877339s for next certificate rotation
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.295136 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.295237 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.295806 4802 factory.go:55] Registering systemd factory
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.295848 4802 factory.go:221] Registration of the systemd container factory successfully
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.296368 4802 factory.go:153] Registering CRI-O factory
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.296403 4802 factory.go:221] Registration of the crio container factory successfully
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.296517 4802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="200ms"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.298352 4802 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.298504 4802 factory.go:103] Registering Raw factory
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.298673 4802 manager.go:1196] Started watching for new ooms in manager
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.300119 4802 manager.go:319] Starting recovery of all containers
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.301206 4802 server.go:460] "Adding debug handlers to kubelet server"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.311839 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312012 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312079 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312141 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312199 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312301 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312370 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312434 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312506 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.309863 4802 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.57:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186b30466340b3b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-04 04:45:58.289339314 +0000 UTC m=+0.697339979,LastTimestamp:2025-10-04 04:45:58.289339314 +0000 UTC m=+0.697339979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312573 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312667 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312717 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312740 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312766 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312789 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312807 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312830 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312853 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312873 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312894 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312912 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312931 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312953 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312976 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.312994 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.313014 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.313038 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.313058 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.313078 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.313100 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.313122 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.313140 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315333 4802 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315379 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315401 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315419 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315437 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315489 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315508 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315526 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315545 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315564 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315582 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315600 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315620 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315664 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315686 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315759 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315777 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315795 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315815 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315835 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315853 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315878 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315901 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315919 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315935 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315952 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315971 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.315987 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316002 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316077 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316095 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316112 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316128 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316143 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316161 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316177 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316195 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316212 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316231 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316252 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316268 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316285 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316303 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316319 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316335 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316351 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316367 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316383 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316399 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316415 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136"
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316433 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316450 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316468 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316488 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316506 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316523 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316540 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316553 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316591 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316606 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316619 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316632 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316672 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316691 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316707 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316720 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316734 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316746 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316758 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316770 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316816 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316838 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316868 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316893 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316913 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316930 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316946 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316962 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316977 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.316992 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" 
seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317005 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317018 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317032 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317047 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317064 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317081 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317100 
4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317116 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317132 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317148 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317167 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317181 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317196 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317214 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317232 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317247 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317261 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317278 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317298 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317317 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317336 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317355 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317374 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317392 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317409 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" 
seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317426 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317442 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317458 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317476 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317492 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317508 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317524 4802 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317540 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317559 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317576 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317591 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317608 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317625 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317662 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317679 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317701 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317718 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317736 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317753 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317770 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317788 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317804 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317819 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317834 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317851 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" 
seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317866 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317882 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.317899 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318031 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318050 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318072 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318087 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318107 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318168 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318185 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318201 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318216 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318233 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318247 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318262 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318276 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318292 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318307 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318320 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318336 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318352 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318368 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318384 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318398 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318414 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318428 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318444 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318459 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318475 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318496 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318513 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318529 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318582 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318599 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318615 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318630 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318667 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318683 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318698 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318713 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318727 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318741 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318755 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318769 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318790 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318806 4802 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318818 4802 reconstruct.go:97] "Volume reconstruction finished"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.318827 4802 reconciler.go:26] "Reconciler: start to sync state"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.339672 4802 manager.go:324] Recovery completed
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.349752 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.351371 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.351410 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.351421 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.352782 4802 cpu_manager.go:225] "Starting CPU manager" policy="none"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.352813 4802 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.352841 4802 state_mem.go:36] "Initialized new in-memory state store"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.356364 4802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.358359 4802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.358445 4802 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.358495 4802 kubelet.go:2335] "Starting kubelet main sync loop"
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.358568 4802 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.359852 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.359988 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.386409 4802 policy_none.go:49] "None policy: Start"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.388117 4802 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.388214 4802 state_mem.go:35] "Initializing new in-memory state store"
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.394029 4802 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.454825 4802 manager.go:334] "Starting Device Plugin manager"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.455017 4802 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.455040 4802 server.go:79] "Starting device plugin registration server"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.455917 4802 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.455949 4802 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.456220 4802 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.456535 4802 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.456562 4802 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.458852 4802 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.459043 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.460773 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.460827 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.460847 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.461086 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.461490 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.461603 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.462552 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.462674 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.462694 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.462967 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.463142 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.463209 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.463607 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.463676 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.463691 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464201 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464243 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464256 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464267 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464281 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464291 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464440 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464602 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.464661 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.466761 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.466790 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.466800 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.467026 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.467094 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.467125 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.467141 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.467800 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.467891 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.468349 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.468370 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.468379 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.468668 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.468703 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.470030 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.470048 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.470057 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.470170 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.470277 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.470313 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.473776 4802 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.500040 4802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="400ms"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.520743 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.520911 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.520984 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521118 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521160 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521179 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521197 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521214 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521230 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521246 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521262 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521308 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521358 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521428 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.521471 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.556983 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.558376 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.558431 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.558444 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.558479 4802 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.559165 4802 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.623975 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624065 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624102 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624145 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624220 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624282 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624325 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624356 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624382 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624438 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624504 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624366 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624559 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624514 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624594 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624635 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624677 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624504 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624703 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624730 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624748 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624753 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624789 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:45:58 crc 
kubenswrapper[4802]: I1004 04:45:58.624792 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624838 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624854 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624881 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.625137 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.625193 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.624491 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.759582 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.761548 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.761623 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.761684 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.761721 4802 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.762473 4802 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.802684 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.822480 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.828597 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.848176 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-56a5e720947de1822d121206cfbafe95f8232df5ebebea5d4dee7633483bc5cf WatchSource:0}: Error finding container 56a5e720947de1822d121206cfbafe95f8232df5ebebea5d4dee7633483bc5cf: Status 404 returned error can't find the container with id 56a5e720947de1822d121206cfbafe95f8232df5ebebea5d4dee7633483bc5cf Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.850892 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-48295b31044b4ad868669ac01ce71cde31c0f84d4e33549757dc5d17b09f4af0 WatchSource:0}: Error finding container 48295b31044b4ad868669ac01ce71cde31c0f84d4e33549757dc5d17b09f4af0: Status 404 returned error can't find the container with id 48295b31044b4ad868669ac01ce71cde31c0f84d4e33549757dc5d17b09f4af0 Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.851879 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.854050 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-45c4924d7efdcc89c20c4164c53444672b061ce0261395739b03a359d03149d0 WatchSource:0}: Error finding container 45c4924d7efdcc89c20c4164c53444672b061ce0261395739b03a359d03149d0: Status 404 returned error can't find the container with id 45c4924d7efdcc89c20c4164c53444672b061ce0261395739b03a359d03149d0 Oct 04 04:45:58 crc kubenswrapper[4802]: I1004 04:45:58.858220 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.872964 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-1e937364066a521b92f0e9c5e6d31ea5ba73854b119c2cee578ce5cd2a5f4531 WatchSource:0}: Error finding container 1e937364066a521b92f0e9c5e6d31ea5ba73854b119c2cee578ce5cd2a5f4531: Status 404 returned error can't find the container with id 1e937364066a521b92f0e9c5e6d31ea5ba73854b119c2cee578ce5cd2a5f4531 Oct 04 04:45:58 crc kubenswrapper[4802]: W1004 04:45:58.879669 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-62e400f9e92a959faf5195be4bd0c2ade2d4a81b4bc3b421f4faf503363a2c2e WatchSource:0}: Error finding container 62e400f9e92a959faf5195be4bd0c2ade2d4a81b4bc3b421f4faf503363a2c2e: Status 404 returned error can't find the container with id 62e400f9e92a959faf5195be4bd0c2ade2d4a81b4bc3b421f4faf503363a2c2e Oct 04 04:45:58 crc kubenswrapper[4802]: E1004 04:45:58.902206 4802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="800ms" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.163298 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.164982 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.165055 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.165069 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.165106 4802 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 04 04:45:59 crc kubenswrapper[4802]: E1004 04:45:59.165790 4802 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Oct 04 04:45:59 crc kubenswrapper[4802]: W1004 04:45:59.191807 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:45:59 crc kubenswrapper[4802]: E1004 04:45:59.191946 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" 
Oct 04 04:45:59 crc kubenswrapper[4802]: W1004 04:45:59.202390 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:45:59 crc kubenswrapper[4802]: E1004 04:45:59.202530 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Oct 04 04:45:59 crc kubenswrapper[4802]: W1004 04:45:59.283020 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:45:59 crc kubenswrapper[4802]: E1004 04:45:59.283121 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.292100 4802 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.363957 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"56a5e720947de1822d121206cfbafe95f8232df5ebebea5d4dee7633483bc5cf"} Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.365059 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"62e400f9e92a959faf5195be4bd0c2ade2d4a81b4bc3b421f4faf503363a2c2e"} Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.366967 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1e937364066a521b92f0e9c5e6d31ea5ba73854b119c2cee578ce5cd2a5f4531"} Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.368134 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"45c4924d7efdcc89c20c4164c53444672b061ce0261395739b03a359d03149d0"} Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.369701 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"48295b31044b4ad868669ac01ce71cde31c0f84d4e33549757dc5d17b09f4af0"} Oct 04 04:45:59 crc kubenswrapper[4802]: W1004 04:45:59.385770 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:45:59 crc kubenswrapper[4802]: E1004 04:45:59.385887 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Oct 04 04:45:59 crc kubenswrapper[4802]: E1004 04:45:59.703564 4802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="1.6s" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.966061 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.967928 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.967991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.968012 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:45:59 crc kubenswrapper[4802]: I1004 04:45:59.968052 4802 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 04 04:45:59 crc kubenswrapper[4802]: E1004 04:45:59.968860 4802 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.291787 4802 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.375822 4802 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890"} Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.375905 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.375923 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd"} Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.375942 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0"} Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.375957 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf"} Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.376997 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.377047 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.377066 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.378227 4802 generic.go:334] 
"Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc" exitCode=0 Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.378289 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.378307 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc"} Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.379577 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.379600 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.379609 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.380531 4802 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53" exitCode=0 Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.380594 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.380608 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53"} Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.381247 4802 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.381267 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.381276 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.381876 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.382446 4802 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="595bda9c56531562dfdf1dcb7e688683cb3aa1ddc1e129639b8775a0ea5d4d83" exitCode=0 Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.382477 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"595bda9c56531562dfdf1dcb7e688683cb3aa1ddc1e129639b8775a0ea5d4d83"} Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.382531 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.382547 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.382558 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.382570 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.383882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 
04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.383916 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.383934 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.385896 4802 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35" exitCode=0 Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.385948 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35"} Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.385986 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.387182 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.387218 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.387228 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.747949 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:00 crc kubenswrapper[4802]: I1004 04:46:00.782517 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:00 crc kubenswrapper[4802]: W1004 04:46:00.854684 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:46:00 crc kubenswrapper[4802]: E1004 04:46:00.854815 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.292033 4802 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:46:01 crc kubenswrapper[4802]: E1004 04:46:01.304717 4802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.57:6443: connect: connection refused" interval="3.2s" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.391848 4802 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3" exitCode=0 Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.391903 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3"} Oct 04 04:46:01 crc 
kubenswrapper[4802]: I1004 04:46:01.392088 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.393601 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.393665 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.393683 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.394879 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d09f6c101f943f749a7ab23fdcc689dfccd6a2bf61239c14e40640bd7a52cb6f"} Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.394958 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.396099 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.396127 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.396138 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.400145 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a"} 
Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.400327 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a"} Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.400346 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b"} Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.400447 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.404688 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.404740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.404754 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.414024 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad"} Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.414095 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80"} Oct 04 04:46:01 crc 
kubenswrapper[4802]: I1004 04:46:01.414127 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b"} Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.414142 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908"} Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.414098 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.415358 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.415411 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.415426 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.569563 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.570991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.571042 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:01 crc kubenswrapper[4802]: I1004 04:46:01.571058 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:01 crc 
kubenswrapper[4802]: I1004 04:46:01.571089 4802 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 04 04:46:01 crc kubenswrapper[4802]: E1004 04:46:01.571725 4802 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.57:6443: connect: connection refused" node="crc" Oct 04 04:46:01 crc kubenswrapper[4802]: W1004 04:46:01.591373 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:46:01 crc kubenswrapper[4802]: E1004 04:46:01.591458 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Oct 04 04:46:01 crc kubenswrapper[4802]: W1004 04:46:01.961738 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.57:6443: connect: connection refused Oct 04 04:46:01 crc kubenswrapper[4802]: E1004 04:46:01.961900 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.57:6443: connect: connection refused" logger="UnhandledError" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.421714 4802 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af" exitCode=0 Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.421849 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af"} Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.421940 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.423594 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.423716 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.423747 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.432993 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"adbbdd960c267e4e3724a8e065a30c4ed8235c9bd83a91a29231e78130b9d903"} Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.433075 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.433144 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.433201 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.433293 4802 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.434033 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.434607 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.435738 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.435765 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.435777 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.435912 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.435936 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.435948 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.436594 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.436618 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.436630 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:02 crc 
kubenswrapper[4802]: I1004 04:46:02.436653 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.436669 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:02 crc kubenswrapper[4802]: I1004 04:46:02.436678 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.319957 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.440194 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f"} Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.440256 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.440273 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.440295 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.440277 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7"} Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.440389 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e"} Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.440375 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441755 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441797 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441800 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441808 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441810 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441818 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441828 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441833 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.441846 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:03 crc kubenswrapper[4802]: I1004 04:46:03.718959 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.449572 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29"} Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.449711 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.449755 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.449802 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.449727 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea"} Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.451966 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.452027 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.452051 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.452145 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.452186 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:04 crc 
kubenswrapper[4802]: I1004 04:46:04.452203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.772118 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.774269 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.774345 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.774373 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:04 crc kubenswrapper[4802]: I1004 04:46:04.774424 4802 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 04 04:46:05 crc kubenswrapper[4802]: I1004 04:46:05.452262 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:05 crc kubenswrapper[4802]: I1004 04:46:05.453702 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:05 crc kubenswrapper[4802]: I1004 04:46:05.453784 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:05 crc kubenswrapper[4802]: I1004 04:46:05.453806 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:06 crc kubenswrapper[4802]: I1004 04:46:06.409548 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:06 crc kubenswrapper[4802]: I1004 04:46:06.409784 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 04:46:06 crc 
kubenswrapper[4802]: I1004 04:46:06.409909 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:06 crc kubenswrapper[4802]: I1004 04:46:06.411277 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:06 crc kubenswrapper[4802]: I1004 04:46:06.411333 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:06 crc kubenswrapper[4802]: I1004 04:46:06.411347 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:07 crc kubenswrapper[4802]: I1004 04:46:07.790359 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:07 crc kubenswrapper[4802]: I1004 04:46:07.790607 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:07 crc kubenswrapper[4802]: I1004 04:46:07.792456 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:07 crc kubenswrapper[4802]: I1004 04:46:07.792495 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:07 crc kubenswrapper[4802]: I1004 04:46:07.792509 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.232383 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.232740 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.234838 4802 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.234901 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.234916 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:08 crc kubenswrapper[4802]: E1004 04:46:08.474241 4802 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.602356 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.602726 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.607558 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.607731 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:08 crc kubenswrapper[4802]: I1004 04:46:08.607840 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:09 crc kubenswrapper[4802]: I1004 04:46:09.569442 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:09 crc kubenswrapper[4802]: I1004 04:46:09.569752 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:09 crc kubenswrapper[4802]: I1004 04:46:09.571354 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 
04:46:09 crc kubenswrapper[4802]: I1004 04:46:09.571389 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:09 crc kubenswrapper[4802]: I1004 04:46:09.571399 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:09 crc kubenswrapper[4802]: I1004 04:46:09.574819 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.468245 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.469393 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.469456 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.469474 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.635007 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.635266 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.637178 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.637248 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.637260 4802 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.791005 4802 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 04 04:46:10 crc kubenswrapper[4802]: I1004 04:46:10.791100 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 04 04:46:12 crc kubenswrapper[4802]: I1004 04:46:12.293688 4802 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 04 04:46:12 crc kubenswrapper[4802]: W1004 04:46:12.465442 4802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 04 04:46:12 crc kubenswrapper[4802]: I1004 04:46:12.465549 4802 trace.go:236] Trace[1160945101]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Oct-2025 04:46:02.463) (total time: 10001ms): Oct 04 04:46:12 crc kubenswrapper[4802]: Trace[1160945101]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (04:46:12.465) Oct 04 04:46:12 crc kubenswrapper[4802]: Trace[1160945101]: [10.00194366s] 
[10.00194366s] END Oct 04 04:46:12 crc kubenswrapper[4802]: E1004 04:46:12.465571 4802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.144202 4802 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.144785 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.149501 4802 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.149587 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 
04:46:13.477733 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.479497 4802 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="adbbdd960c267e4e3724a8e065a30c4ed8235c9bd83a91a29231e78130b9d903" exitCode=255 Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.479551 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"adbbdd960c267e4e3724a8e065a30c4ed8235c9bd83a91a29231e78130b9d903"} Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.479736 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.480903 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.480943 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.480955 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:13 crc kubenswrapper[4802]: I1004 04:46:13.481542 4802 scope.go:117] "RemoveContainer" containerID="adbbdd960c267e4e3724a8e065a30c4ed8235c9bd83a91a29231e78130b9d903" Oct 04 04:46:14 crc kubenswrapper[4802]: I1004 04:46:14.484171 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 04 04:46:14 crc kubenswrapper[4802]: I1004 04:46:14.485530 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d"} Oct 04 04:46:14 crc kubenswrapper[4802]: I1004 04:46:14.485718 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:14 crc kubenswrapper[4802]: I1004 04:46:14.486965 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:14 crc kubenswrapper[4802]: I1004 04:46:14.487007 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:14 crc kubenswrapper[4802]: I1004 04:46:14.487027 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.413892 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.414067 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.414213 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.415259 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.415324 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.415335 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.419341 4802 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.491927 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.492854 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.492899 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.492912 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:16 crc kubenswrapper[4802]: I1004 04:46:16.634754 4802 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 04 04:46:17 crc kubenswrapper[4802]: I1004 04:46:17.494511 4802 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 04 04:46:17 crc kubenswrapper[4802]: I1004 04:46:17.495790 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:17 crc kubenswrapper[4802]: I1004 04:46:17.495862 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:17 crc kubenswrapper[4802]: I1004 04:46:17.495878 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.143837 4802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 
04:46:18.148166 4802 trace.go:236] Trace[1911100003]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Oct-2025 04:46:04.519) (total time: 13628ms): Oct 04 04:46:18 crc kubenswrapper[4802]: Trace[1911100003]: ---"Objects listed" error: 13628ms (04:46:18.148) Oct 04 04:46:18 crc kubenswrapper[4802]: Trace[1911100003]: [13.628657034s] [13.628657034s] END Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.148207 4802 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.153162 4802 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.157143 4802 trace.go:236] Trace[1690348380]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Oct-2025 04:46:04.835) (total time: 13322ms): Oct 04 04:46:18 crc kubenswrapper[4802]: Trace[1690348380]: ---"Objects listed" error: 13321ms (04:46:18.156) Oct 04 04:46:18 crc kubenswrapper[4802]: Trace[1690348380]: [13.322059746s] [13.322059746s] END Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.157179 4802 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.161598 4802 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.161603 4802 trace.go:236] Trace[1734841523]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Oct-2025 04:46:05.636) (total time: 12525ms): Oct 04 04:46:18 crc kubenswrapper[4802]: Trace[1734841523]: ---"Objects listed" error: 12524ms (04:46:18.161) Oct 04 04:46:18 crc kubenswrapper[4802]: Trace[1734841523]: [12.525029069s] [12.525029069s] END Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.161849 4802 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.162021 4802 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.164002 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.164851 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.164949 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.165067 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.165156 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.180734 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6
-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.185853 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.185928 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.185949 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.185994 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.186014 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.201870 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6
-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.206451 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.206515 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.206533 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.206563 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.206577 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.218102 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6
-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.222014 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.222065 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.222078 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.222100 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.222128 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.233877 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6
-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.237956 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.238005 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.238018 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.238043 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.238055 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.250266 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6
-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.250385 4802 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.252402 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.252439 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.252450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.252472 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.252491 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.282504 4802 apiserver.go:52] "Watching apiserver" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.287367 4802 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.287757 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.288153 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.288251 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.288344 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.288383 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.288565 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.288723 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.288779 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.288829 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.289027 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.293040 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.293347 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.297450 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.297464 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.297609 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.297727 4802 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.297465 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.297652 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.297733 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.299679 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 04 04:46:18 crc kubenswrapper[4802]: 
I1004 04:46:18.330042 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fjmgk"] Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.330429 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fjmgk" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.332495 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.332924 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.333476 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.353890 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.353929 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.353951 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.353974 4802 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.353993 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354009 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354026 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354040 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354056 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") 
" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354071 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354088 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354105 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354120 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354135 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354150 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354169 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354186 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354205 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354220 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354237 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 04 04:46:18 crc 
kubenswrapper[4802]: I1004 04:46:18.354253 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354271 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354287 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354307 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354321 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354336 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354378 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354394 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354408 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354427 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354441 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 04 04:46:18 crc 
kubenswrapper[4802]: I1004 04:46:18.354458 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354472 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354449 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354440 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354489 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354650 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354674 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354718 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354735 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354750 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354768 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354801 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354802 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354825 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354849 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354882 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354902 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354921 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354922 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354953 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.354994 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355013 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355050 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355069 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355145 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355163 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355177 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355327 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355349 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355373 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355412 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355395 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355430 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355524 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355557 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355583 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355675 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355730 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355761 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355776 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.357678 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.357721 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.357731 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.357748 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.357759 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355895 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355963 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.356218 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.356251 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.356448 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.356493 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.356810 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.357005 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.357056 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.357085 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.357273 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:46:18.857233135 +0000 UTC m=+21.265233760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.355784 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358293 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358318 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358341 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358359 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358377 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358395 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358411 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358427 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358445 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358462 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.358667 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359107 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359159 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359179 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359248 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359286 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359309 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359333 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359356 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359377 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359400 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359423 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359443 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359473 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359495 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359516 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359536 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359546 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359554 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359676 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359713 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359748 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359787 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359819 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359851 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359883 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359914 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359948 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.359980 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360009 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360037 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360067 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360105 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360135 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360161 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360189 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360214 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360239 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360264 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360286 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 04 04:46:18 
crc kubenswrapper[4802]: I1004 04:46:18.360321 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360347 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360370 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360395 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360420 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360444 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360470 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360494 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360522 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360548 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360573 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 
04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360596 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360624 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360680 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360710 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360739 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360768 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360793 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360828 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360852 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360885 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360910 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 
04:46:18.360934 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360959 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.360984 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361009 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361036 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361446 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") 
pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361473 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361501 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361527 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361553 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361581 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc 
kubenswrapper[4802]: I1004 04:46:18.361612 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361658 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361687 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361713 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361737 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361768 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361791 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361817 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361845 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361871 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361901 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361934 4802 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361964 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.361994 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362022 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362047 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362077 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362121 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362147 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362171 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362198 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362221 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362247 4802 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362276 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362300 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362323 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362349 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362378 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362402 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362430 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362456 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362500 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362529 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 
04:46:18.362553 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362580 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362709 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362739 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362764 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362792 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362818 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362864 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362891 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362917 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.362942 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 
04:46:18.363001 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363053 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363082 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363107 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363134 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363170 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363234 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363267 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363301 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363330 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363359 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" 
(UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363386 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363412 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363440 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363468 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 
04:46:18.363491 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363522 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363551 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363587 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363606 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363616 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363762 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363785 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363801 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363812 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363822 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363834 4802 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363845 4802 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363856 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363867 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363877 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363889 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363901 4802 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363912 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363923 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363933 4802 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363945 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363955 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363966 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363977 4802 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.363987 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.364275 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.364273 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.364578 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.365961 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366331 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366378 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366475 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366806 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366846 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366946 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366972 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366989 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.366997 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.367038 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.367105 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.368130 4802 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.371865 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.373886 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.373986 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.374792 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.376948 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.378407 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.378817 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.379044 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.379373 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.379603 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.379692 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.379730 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.379920 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.380069 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.380426 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.380846 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.381332 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.381396 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.381411 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.381440 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.381654 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.381847 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.381987 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.382223 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.383652 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.383849 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.384261 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.384433 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.384618 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.385396 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.385445 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.385470 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.385961 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.386063 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.386942 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.387168 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.388076 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.388730 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.388954 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.389652 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.392433 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.392602 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.393270 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.396147 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.396151 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.396590 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.396720 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.396837 4802 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.396904 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:18.896884077 +0000 UTC m=+21.304884892 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.397095 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.397530 4802 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.397558 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.397559 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.397583 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:18.897572086 +0000 UTC m=+21.305572921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.397809 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.397871 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.398126 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.398237 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.398487 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.398519 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.398690 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.398878 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.398892 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.398921 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.399430 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.399671 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.399997 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.400336 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.400553 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.400995 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.401347 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.401432 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.401758 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.403600 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.404322 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.405206 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.407840 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.408717 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.408727 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.408825 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.409584 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.410069 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.410403 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.410549 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.410664 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.411024 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.411175 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.411497 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.411666 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.411774 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.411999 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.412148 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.412323 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.412160 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.412425 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.412755 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.412996 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.413031 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.412995 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.413172 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.413375 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.413745 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.414089 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.413146 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.414371 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.414744 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.414873 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.416195 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.416368 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.417018 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.417164 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.417295 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.417489 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.417848 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.418036 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.418366 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.418657 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.419409 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.419458 4802 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.419576 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:18.919548596 +0000 UTC m=+21.327549221 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.419614 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.419901 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.419942 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.420142 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.420235 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.420743 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.420993 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.421060 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.421082 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.421100 4802 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.421161 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:18.92113946 +0000 UTC m=+21.329140095 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.421528 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.422705 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.422810 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.424509 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.425411 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.425723 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.425815 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.425931 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.426174 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.426289 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.428767 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.428858 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.430501 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.430607 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.432265 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.433460 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.434257 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.435684 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.436027 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.436701 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.438341 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.439406 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.440260 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.460788 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.461516 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.463194 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.464041 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.464528 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 04 04:46:18 crc kubenswrapper[4802]: W1004 04:46:18.464755 4802 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~projected/kube-api-access-x4zgh Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.464799 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.464766 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25tnf\" (UniqueName: \"kubernetes.io/projected/e180d740-f48b-4755-b3ad-088f40b010ed-kube-api-access-25tnf\") pod \"node-resolver-fjmgk\" (UID: \"e180d740-f48b-4755-b3ad-088f40b010ed\") " pod="openshift-dns/node-resolver-fjmgk" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.464923 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.464961 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465008 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e180d740-f48b-4755-b3ad-088f40b010ed-hosts-file\") pod \"node-resolver-fjmgk\" (UID: \"e180d740-f48b-4755-b3ad-088f40b010ed\") " pod="openshift-dns/node-resolver-fjmgk" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465115 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465163 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465370 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465411 4802 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465429 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465442 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465454 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465464 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465474 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465485 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465504 4802 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465518 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465529 4802 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465573 4802 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465586 4802 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on 
node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465599 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465609 4802 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465619 4802 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465630 4802 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465656 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465666 4802 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465676 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465686 4802 
reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465697 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465707 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465717 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465726 4802 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465736 4802 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465745 4802 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465755 4802 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465763 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465773 4802 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465782 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465792 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465802 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465812 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465821 4802 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" 
DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465831 4802 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465841 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465851 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465861 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465873 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465884 4802 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465897 4802 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc 
kubenswrapper[4802]: I1004 04:46:18.465911 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465929 4802 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465944 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465956 4802 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465967 4802 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465978 4802 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.465989 4802 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 
04:46:18.465998 4802 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466008 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466018 4802 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466029 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466040 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466052 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466063 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466072 4802 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466082 4802 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466092 4802 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466102 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466111 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466120 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466131 4802 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466140 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466149 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466159 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466169 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466178 4802 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466187 4802 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466196 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466206 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466216 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466226 4802 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466235 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466244 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466256 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466265 4802 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466276 4802 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 
crc kubenswrapper[4802]: I1004 04:46:18.466286 4802 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466296 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466304 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466313 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466321 4802 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466330 4802 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466338 4802 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466347 4802 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466356 4802 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466365 4802 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466374 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466383 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466393 4802 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466402 4802 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466411 4802 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 04 
04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466421 4802 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466430 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466441 4802 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466450 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466459 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466470 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466480 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466491 4802 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466500 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466509 4802 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466518 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466528 4802 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466544 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466554 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466563 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466573 4802 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466582 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466592 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466600 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466610 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466618 4802 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.466627 4802 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.467050 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.467563 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.467597 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.467606 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.467623 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.467634 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.468836 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.469329 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.469348 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.469623 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471327 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471368 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471380 4802 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471390 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node 
\"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471399 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471409 4802 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471418 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471427 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471436 4802 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471445 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471455 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471463 4802 reconciler_common.go:293] "Volume detached for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471481 4802 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471492 4802 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471503 4802 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471512 4802 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471522 4802 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471532 4802 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471541 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471550 4802 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471560 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471570 4802 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471581 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471591 4802 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471601 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471610 4802 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 
04:46:18.471620 4802 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471628 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471657 4802 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471667 4802 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471676 4802 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471685 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471694 4802 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471704 4802 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.471713 4802 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.474181 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.475087 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.475093 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.475606 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.475748 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.479995 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.480821 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.481634 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.481772 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.481890 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.482830 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.483009 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.483104 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.483117 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.483518 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.483585 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.483596 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.483849 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.483994 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.485616 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.485787 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.486088 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.489967 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.490737 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.493065 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.496442 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.499780 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.501457 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.504700 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.504829 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.506134 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.507831 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.509979 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.511391 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.512874 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.516790 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.527667 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.538734 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.548556 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.559634 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.570382 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.570418 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.570429 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.570445 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.570455 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.571205 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573017 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25tnf\" (UniqueName: \"kubernetes.io/projected/e180d740-f48b-4755-b3ad-088f40b010ed-kube-api-access-25tnf\") pod \"node-resolver-fjmgk\" (UID: \"e180d740-f48b-4755-b3ad-088f40b010ed\") " pod="openshift-dns/node-resolver-fjmgk" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573084 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e180d740-f48b-4755-b3ad-088f40b010ed-hosts-file\") pod \"node-resolver-fjmgk\" (UID: \"e180d740-f48b-4755-b3ad-088f40b010ed\") " pod="openshift-dns/node-resolver-fjmgk" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573121 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 
04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573133 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573143 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573153 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573162 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573172 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573183 4802 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573193 4802 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573204 4802 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573215 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573224 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573234 4802 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573244 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573253 4802 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573262 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573271 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573280 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573290 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573300 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573310 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573319 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573328 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573338 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath 
\"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573346 4802 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.573355 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e180d740-f48b-4755-b3ad-088f40b010ed-hosts-file\") pod \"node-resolver-fjmgk\" (UID: \"e180d740-f48b-4755-b3ad-088f40b010ed\") " pod="openshift-dns/node-resolver-fjmgk" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.579037 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.588142 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.588266 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25tnf\" (UniqueName: \"kubernetes.io/projected/e180d740-f48b-4755-b3ad-088f40b010ed-kube-api-access-25tnf\") pod \"node-resolver-fjmgk\" (UID: \"e180d740-f48b-4755-b3ad-088f40b010ed\") " pod="openshift-dns/node-resolver-fjmgk" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.609898 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.616084 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.622100 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 04 04:46:18 crc kubenswrapper[4802]: W1004 04:46:18.625615 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-9c5d6b853527b6619fa163fc24f6df95f0df978988719c8286bed47c23eacd6e WatchSource:0}: Error finding container 9c5d6b853527b6619fa163fc24f6df95f0df978988719c8286bed47c23eacd6e: Status 404 returned error can't find the container with id 9c5d6b853527b6619fa163fc24f6df95f0df978988719c8286bed47c23eacd6e Oct 04 04:46:18 crc kubenswrapper[4802]: W1004 04:46:18.636363 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-e7b5a306533e2c9664459efc98c2e1fa690b4a524c9ec04a627e50a46570e31b WatchSource:0}: Error finding container e7b5a306533e2c9664459efc98c2e1fa690b4a524c9ec04a627e50a46570e31b: Status 404 returned error can't find the container with id e7b5a306533e2c9664459efc98c2e1fa690b4a524c9ec04a627e50a46570e31b Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.643016 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fjmgk" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.678428 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.678479 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.678494 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.678521 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.678534 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:18 crc kubenswrapper[4802]: W1004 04:46:18.747687 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode180d740_f48b_4755_b3ad_088f40b010ed.slice/crio-be1368afefc0e8e1b35b4bc66b1f079cb8f99436d311f543718b0d4525a5b654 WatchSource:0}: Error finding container be1368afefc0e8e1b35b4bc66b1f079cb8f99436d311f543718b0d4525a5b654: Status 404 returned error can't find the container with id be1368afefc0e8e1b35b4bc66b1f079cb8f99436d311f543718b0d4525a5b654 Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.783065 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.783534 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.783549 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.783569 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.783581 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.877484 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.877821 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:46:19.877794059 +0000 UTC m=+22.285794684 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.888173 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.888203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.888230 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.888245 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: 
I1004 04:46:18.888258 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.905098 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.919839 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.921623 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.922383 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.934233 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.945290 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.955243 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.971176 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.978671 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.978713 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.978741 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.978764 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.978890 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.978909 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.978906 4802 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.979009 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.979038 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.979092 4802 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.979023 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:19.979001423 +0000 UTC m=+22.387002048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.978915 4802 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.978922 4802 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.979241 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-10-04 04:46:19.979219789 +0000 UTC m=+22.387220464 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.979285 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:19.979277 +0000 UTC m=+22.387277735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:18 crc kubenswrapper[4802]: E1004 04:46:18.979301 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:19.979294651 +0000 UTC m=+22.387295376 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.984395 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.990836 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.990884 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.990901 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.990925 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.990950 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:18Z","lastTransitionTime":"2025-10-04T04:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:18 crc kubenswrapper[4802]: I1004 04:46:18.994908 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.006881 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.018603 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.029124 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.040506 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.052519 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.061868 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.073789 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.083668 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.093752 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.093806 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.093819 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.093839 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.093852 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.196277 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.196335 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.196352 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.196376 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.196395 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.299294 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.299335 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.299347 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.299385 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.299399 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.359384 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.359570 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.401916 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.401968 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.401978 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.402010 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.402021 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.504528 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.504629 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.504673 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.504713 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.504734 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.504990 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fjmgk" event={"ID":"e180d740-f48b-4755-b3ad-088f40b010ed","Type":"ContainerStarted","Data":"be1368afefc0e8e1b35b4bc66b1f079cb8f99436d311f543718b0d4525a5b654"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.506283 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e7b5a306533e2c9664459efc98c2e1fa690b4a524c9ec04a627e50a46570e31b"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.507314 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c22fe8e4a6492319caa5e51a7117b8c03c3ff36aa6a810c387242770d61ddd8"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.508399 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9c5d6b853527b6619fa163fc24f6df95f0df978988719c8286bed47c23eacd6e"} Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.515026 4802 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.607315 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.607360 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 
04:46:19.607373 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.607393 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.607407 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.710035 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.710106 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.710118 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.710140 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.710154 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.813407 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.813464 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.813478 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.813499 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.813513 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.885996 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.886259 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:46:21.886219306 +0000 UTC m=+24.294219941 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.896113 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dc98r"] Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.896570 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.897939 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6jpj5"] Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.898163 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.898974 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.899062 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.899078 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.899237 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.899238 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.899307 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gp55j"] Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.900129 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.900368 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.900396 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.901753 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.901813 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.901897 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.903778 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.903858 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.911292 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.917163 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.917207 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.917219 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.917237 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.917250 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:19Z","lastTransitionTime":"2025-10-04T04:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.927873 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.941113 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.953342 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.967216 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.985255 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986580 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-socket-dir-parent\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986658 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-k8s-cni-cncf-io\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986690 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-cnibin\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986723 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-cnibin\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986794 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-kubelet\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986856 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-cni-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986897 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkqc\" (UniqueName: \"kubernetes.io/projected/611d63c9-e554-40be-aab2-f2ca43f6827b-kube-api-access-szkqc\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986929 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-conf-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986949 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-multus-certs\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.986989 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/951838a5-12ca-41a9-a0b2-df95499f89ac-cni-binary-copy\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987039 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcrl9\" (UniqueName: \"kubernetes.io/projected/c1c56664-b32b-475a-89eb-55910da58338-kube-api-access-bcrl9\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc 
kubenswrapper[4802]: I1004 04:46:19.987086 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/951838a5-12ca-41a9-a0b2-df95499f89ac-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987135 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987166 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-system-cni-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987189 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-netns\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987216 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-cni-bin\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " 
pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987271 4802 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987344 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:21.987322726 +0000 UTC m=+24.395323351 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987397 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2w78\" (UniqueName: \"kubernetes.io/projected/951838a5-12ca-41a9-a0b2-df95499f89ac-kube-api-access-p2w78\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987450 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-system-cni-dir\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987475 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1c56664-b32b-475a-89eb-55910da58338-cni-binary-copy\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987501 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-cni-multus\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987533 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987562 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987585 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-os-release\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:19 crc kubenswrapper[4802]: 
I1004 04:46:19.987613 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/611d63c9-e554-40be-aab2-f2ca43f6827b-proxy-tls\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987656 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987710 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-hostroot\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987735 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1c56664-b32b-475a-89eb-55910da58338-multus-daemon-config\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987757 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987849 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987866 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987794 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987929 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/611d63c9-e554-40be-aab2-f2ca43f6827b-mcd-auth-proxy-config\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987878 4802 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.987968 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-os-release\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987983 
4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:21.987973944 +0000 UTC m=+24.395974569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987917 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.988000 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-etc-kubernetes\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.988013 4802 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:19 crc kubenswrapper[4802]: I1004 04:46:19.988019 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/611d63c9-e554-40be-aab2-f2ca43f6827b-rootfs\") 
pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.988053 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:21.988041496 +0000 UTC m=+24.396042231 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.987808 4802 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:19 crc kubenswrapper[4802]: E1004 04:46:19.988095 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:21.988088657 +0000 UTC m=+24.396089282 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.008812 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.022579 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.022664 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.022679 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.022699 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.022710 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.026128 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.034224 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.043368 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.055620 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.065517 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.073951 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.086201 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088671 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-os-release\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088724 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/611d63c9-e554-40be-aab2-f2ca43f6827b-proxy-tls\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088747 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088834 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-hostroot\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088848 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-os-release\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 
04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088858 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1c56664-b32b-475a-89eb-55910da58338-multus-daemon-config\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088917 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/611d63c9-e554-40be-aab2-f2ca43f6827b-mcd-auth-proxy-config\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088948 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-os-release\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088971 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-etc-kubernetes\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.088991 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/611d63c9-e554-40be-aab2-f2ca43f6827b-rootfs\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089031 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-socket-dir-parent\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089050 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-k8s-cni-cncf-io\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089074 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-cnibin\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089096 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-cnibin\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089122 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-etc-kubernetes\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089117 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-kubelet\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089163 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-k8s-cni-cncf-io\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089168 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-cni-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089190 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-conf-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089206 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-cnibin\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089209 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-multus-certs\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " 
pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089229 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/951838a5-12ca-41a9-a0b2-df95499f89ac-cni-binary-copy\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089246 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szkqc\" (UniqueName: \"kubernetes.io/projected/611d63c9-e554-40be-aab2-f2ca43f6827b-kube-api-access-szkqc\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089252 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/611d63c9-e554-40be-aab2-f2ca43f6827b-rootfs\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089256 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-socket-dir-parent\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089279 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-system-cni-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" 
Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089308 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcrl9\" (UniqueName: \"kubernetes.io/projected/c1c56664-b32b-475a-89eb-55910da58338-kube-api-access-bcrl9\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089329 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/951838a5-12ca-41a9-a0b2-df95499f89ac-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089354 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-netns\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089374 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-cni-bin\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089394 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2w78\" (UniqueName: \"kubernetes.io/projected/951838a5-12ca-41a9-a0b2-df95499f89ac-kube-api-access-p2w78\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc 
kubenswrapper[4802]: I1004 04:46:20.089415 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-system-cni-dir\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089442 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1c56664-b32b-475a-89eb-55910da58338-cni-binary-copy\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089444 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-cni-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089459 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-cni-multus\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089473 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-multus-conf-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089494 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-multus-certs\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089514 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-cni-multus\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089620 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1c56664-b32b-475a-89eb-55910da58338-multus-daemon-config\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089310 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-system-cni-dir\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089231 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-cnibin\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089143 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-kubelet\") pod \"multus-6jpj5\" 
(UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089095 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-os-release\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089835 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-run-netns\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089859 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-system-cni-dir\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089870 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-hostroot\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089931 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1c56664-b32b-475a-89eb-55910da58338-host-var-lib-cni-bin\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.089992 4802 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/611d63c9-e554-40be-aab2-f2ca43f6827b-mcd-auth-proxy-config\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.090291 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/951838a5-12ca-41a9-a0b2-df95499f89ac-cni-binary-copy\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.090398 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/951838a5-12ca-41a9-a0b2-df95499f89ac-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.090452 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1c56664-b32b-475a-89eb-55910da58338-cni-binary-copy\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.097905 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.103572 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/951838a5-12ca-41a9-a0b2-df95499f89ac-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.108714 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.119585 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.124837 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.124901 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.124921 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.124942 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.124955 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.128882 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.138724 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.148660 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.161534 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/611d63c9-e554-40be-aab2-f2ca43f6827b-proxy-tls\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") 
" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.161876 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szkqc\" (UniqueName: \"kubernetes.io/projected/611d63c9-e554-40be-aab2-f2ca43f6827b-kube-api-access-szkqc\") pod \"machine-config-daemon-dc98r\" (UID: \"611d63c9-e554-40be-aab2-f2ca43f6827b\") " pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.161908 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2w78\" (UniqueName: \"kubernetes.io/projected/951838a5-12ca-41a9-a0b2-df95499f89ac-kube-api-access-p2w78\") pod \"multus-additional-cni-plugins-gp55j\" (UID: \"951838a5-12ca-41a9-a0b2-df95499f89ac\") " pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.161951 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcrl9\" (UniqueName: \"kubernetes.io/projected/c1c56664-b32b-475a-89eb-55910da58338-kube-api-access-bcrl9\") pod \"multus-6jpj5\" (UID: \"c1c56664-b32b-475a-89eb-55910da58338\") " pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.213989 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.221334 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6jpj5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.227165 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.227202 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.227219 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.227235 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.227247 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.227537 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gp55j" Oct 04 04:46:20 crc kubenswrapper[4802]: W1004 04:46:20.233591 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod611d63c9_e554_40be_aab2_f2ca43f6827b.slice/crio-d5b70c63b06bbef9c1431200655eebd007a40b5fb361bbb83768099654933dad WatchSource:0}: Error finding container d5b70c63b06bbef9c1431200655eebd007a40b5fb361bbb83768099654933dad: Status 404 returned error can't find the container with id d5b70c63b06bbef9c1431200655eebd007a40b5fb361bbb83768099654933dad Oct 04 04:46:20 crc kubenswrapper[4802]: W1004 04:46:20.247821 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod951838a5_12ca_41a9_a0b2_df95499f89ac.slice/crio-0f3a574a12867d85d00ae9f71ab339b1508687221fb337d4d4bd9ba8994b6e64 WatchSource:0}: Error finding container 0f3a574a12867d85d00ae9f71ab339b1508687221fb337d4d4bd9ba8994b6e64: Status 404 returned error can't find the container with id 0f3a574a12867d85d00ae9f71ab339b1508687221fb337d4d4bd9ba8994b6e64 Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.283951 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bw8lw"] Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.284959 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.293317 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.293367 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.293508 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.293367 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.293596 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.293726 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.294402 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.307824 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.318697 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.330109 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.330337 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.330432 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.330561 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.330842 4802 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.336225 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.348084 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.358369 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.358857 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:20 crc kubenswrapper[4802]: E1004 04:46:20.359073 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.359376 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:20 crc kubenswrapper[4802]: E1004 04:46:20.359448 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.362887 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.370039 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.392622 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.392879 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-systemd-units\") pod 
\"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393002 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-netd\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393040 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393088 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-kubelet\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393114 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-netns\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393185 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-etc-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393219 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-node-log\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393270 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-slash\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393326 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-ovn-kubernetes\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393361 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393388 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-script-lib\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393442 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-var-lib-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393465 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-ovn\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393487 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-log-socket\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393523 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6czkw\" (UniqueName: \"kubernetes.io/projected/11ac83cd-2981-4717-8cb4-2ca3e302461a-kube-api-access-6czkw\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393634 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-bin\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393695 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-config\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393722 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-env-overrides\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393746 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovn-node-metrics-cert\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.393777 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-systemd\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 
04:46:20.407896 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.426944 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.434071 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.434120 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.434134 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.434157 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.434170 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.442679 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.460389 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.476221 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494797 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-slash\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494852 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-ovn-kubernetes\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494875 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494897 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-script-lib\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494927 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-var-lib-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494942 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-ovn\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494957 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-log-socket\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494974 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-ovn-kubernetes\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494977 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6czkw\" (UniqueName: \"kubernetes.io/projected/11ac83cd-2981-4717-8cb4-2ca3e302461a-kube-api-access-6czkw\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.494922 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-slash\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495075 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-bin\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495047 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495082 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-var-lib-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495141 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-bin\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495093 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-log-socket\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495094 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-config\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495232 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-env-overrides\") 
pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495253 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovn-node-metrics-cert\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495126 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-ovn\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495280 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-systemd\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495313 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495340 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-systemd-units\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495358 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-netd\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495384 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-kubelet\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495400 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-netns\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495440 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-etc-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495457 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-node-log\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: 
I1004 04:46:20.495533 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-node-log\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495558 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-systemd\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495579 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495602 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-systemd-units\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495624 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-netd\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495669 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-kubelet\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495693 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-netns\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.495718 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-etc-openvswitch\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.496005 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-script-lib\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.496005 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-config\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.496144 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-env-overrides\") pod \"ovnkube-node-bw8lw\" (UID: 
\"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.498754 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovn-node-metrics-cert\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.509822 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6czkw\" (UniqueName: \"kubernetes.io/projected/11ac83cd-2981-4717-8cb4-2ca3e302461a-kube-api-access-6czkw\") pod \"ovnkube-node-bw8lw\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.521150 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.521698 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.523522 4802 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d" exitCode=255 Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.527963 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.536525 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.536559 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.536572 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.536588 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.536606 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.537394 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.548756 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.560400 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.572015 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.583061 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.592786 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.602736 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.608245 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.618213 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.631267 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.638600 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.638677 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.638692 4802 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.638716 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.638733 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.641057 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.645150 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.646161 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.646834 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.653274 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.666394 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.678502 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.688996 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.706315 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.715781 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.727006 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.741456 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.741503 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.741512 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.741530 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.741541 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.746294 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.756387 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.765270 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.774898 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.788619 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.799842 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.844561 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.844623 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.844633 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.844670 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.844685 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.852331 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.853674 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.855926 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.856627 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.857457 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.858301 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.859003 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.861327 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.862150 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.862825 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.864045 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.864822 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.867246 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.867984 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.869314 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.871809 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.872740 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.874283 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.874915 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: W1004 04:46:20.875596 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11ac83cd_2981_4717_8cb4_2ca3e302461a.slice/crio-317a0770f7ef4c98f8dabe125db8cd4fe6017e4850514539da1aa9565562d5fc WatchSource:0}: Error finding container 317a0770f7ef4c98f8dabe125db8cd4fe6017e4850514539da1aa9565562d5fc: Status 404 returned error can't find the container with id 317a0770f7ef4c98f8dabe125db8cd4fe6017e4850514539da1aa9565562d5fc Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.876344 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.876916 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.878332 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.879373 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.880280 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.881397 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.882462 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.882968 4802 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.883081 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.885601 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.886703 4802 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.888066 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.892358 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.894088 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.895656 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.896507 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.897833 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.898538 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.899949 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900146 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900167 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"d5b70c63b06bbef9c1431200655eebd007a40b5fb361bbb83768099654933dad"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900186 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fjmgk" event={"ID":"e180d740-f48b-4755-b3ad-088f40b010ed","Type":"ContainerStarted","Data":"ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900206 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerStarted","Data":"0f3a574a12867d85d00ae9f71ab339b1508687221fb337d4d4bd9ba8994b6e64"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900221 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jpj5" event={"ID":"c1c56664-b32b-475a-89eb-55910da58338","Type":"ContainerStarted","Data":"1033573d6cba3a08e3cc058fa31bcccd3fb80e4674bb71e3737feb26ed10dac0"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900235 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900256 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900270 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.900322 4802 scope.go:117] "RemoveContainer" containerID="adbbdd960c267e4e3724a8e065a30c4ed8235c9bd83a91a29231e78130b9d903" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.914860 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.915618 4802 scope.go:117] "RemoveContainer" containerID="3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d" Oct 04 04:46:20 crc kubenswrapper[4802]: E1004 04:46:20.916008 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.922953 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.923429 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.934356 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.948668 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.948715 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.948661 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.948728 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.948762 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.948774 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:20Z","lastTransitionTime":"2025-10-04T04:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.960227 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.971589 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.983887 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 04 04:46:20 crc kubenswrapper[4802]: I1004 04:46:20.993778 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.017370 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.021663 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.051414 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.051455 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.051463 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.051481 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.051495 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.057550 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.090750 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.132626 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.153934 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.153979 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.153988 4802 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.154005 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.154016 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.169507 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.212791 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.252524 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.256527 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.256558 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.256567 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.256583 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.256594 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.292520 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.331761 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.359255 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:21 crc kubenswrapper[4802]: E1004 04:46:21.359445 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.359937 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.360003 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.360021 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.360045 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.360062 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.362070 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-895zh"] Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.362533 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.374568 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.385324 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.404597 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.425357 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.444210 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.462240 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.462318 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.462336 4802 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.462365 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.462389 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.491409 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.513994 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m7bh\" (UniqueName: \"kubernetes.io/projected/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-kube-api-access-7m7bh\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.514063 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-host\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.514099 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-serviceca\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.530112 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.531438 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.532716 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerStarted","Data":"b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.533057 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.533942 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jpj5" event={"ID":"c1c56664-b32b-475a-89eb-55910da58338","Type":"ContainerStarted","Data":"8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.535146 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37" exitCode=0 Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.535206 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.535241 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"317a0770f7ef4c98f8dabe125db8cd4fe6017e4850514539da1aa9565562d5fc"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.536439 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.539374 4802 scope.go:117] "RemoveContainer" containerID="3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d" Oct 04 04:46:21 crc kubenswrapper[4802]: E1004 04:46:21.539523 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.565323 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.565371 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.565381 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.565398 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.565411 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.576360 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.615743 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m7bh\" (UniqueName: \"kubernetes.io/projected/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-kube-api-access-7m7bh\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.616030 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-host\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.616113 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-serviceca\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.616186 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-host\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.617452 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-serviceca\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc 
kubenswrapper[4802]: I1004 04:46:21.617606 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.648364 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m7bh\" (UniqueName: \"kubernetes.io/projected/f50461a0-ea5a-4b08-a1ee-512ab8812dbf-kube-api-access-7m7bh\") pod \"node-ca-895zh\" (UID: \"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\") " pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.669078 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.669170 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.669188 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.669210 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.669228 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.672976 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adbbdd960c267e4e3724a8e065a30c4ed8235c9bd83a91a29231e78130b9d903\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:12Z\\\",\\\"message\\\":\\\"W1004 04:46:01.565957 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1004 04:46:01.566324 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759553161 cert, and key in /tmp/serving-cert-4167086973/serving-signer.crt, /tmp/serving-cert-4167086973/serving-signer.key\\\\nI1004 04:46:01.936759 1 observer_polling.go:159] Starting file observer\\\\nW1004 04:46:01.940972 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1004 04:46:01.941202 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:01.942882 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4167086973/tls.crt::/tmp/serving-cert-4167086973/tls.key\\\\\\\"\\\\nF1004 04:46:12.365078 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.677201 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-895zh" Oct 04 04:46:21 crc kubenswrapper[4802]: W1004 04:46:21.690692 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf50461a0_ea5a_4b08_a1ee_512ab8812dbf.slice/crio-4d0afc50848b2f8d3ae720b502bc95818b12e1b666ebb6f8bf2e488bd63db01d WatchSource:0}: Error finding container 4d0afc50848b2f8d3ae720b502bc95818b12e1b666ebb6f8bf2e488bd63db01d: Status 404 returned error can't find the container with id 4d0afc50848b2f8d3ae720b502bc95818b12e1b666ebb6f8bf2e488bd63db01d Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.710253 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.753899 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.772681 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.772735 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.772747 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.772767 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.772783 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.791906 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.834316 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.870801 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.876255 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.876300 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.876311 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.876334 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.876346 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.913165 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.919417 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:21 crc kubenswrapper[4802]: E1004 04:46:21.919668 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:46:25.919614759 +0000 UTC m=+28.327615384 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.950092 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.978959 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.979009 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.979018 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.979032 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.979041 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:21Z","lastTransitionTime":"2025-10-04T04:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:21 crc kubenswrapper[4802]: I1004 04:46:21.990614 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.020365 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.020418 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.020442 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:22 crc 
kubenswrapper[4802]: I1004 04:46:22.020470 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020494 4802 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020587 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:26.020564536 +0000 UTC m=+28.428565171 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020601 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020605 4802 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020654 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020676 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020689 4802 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020712 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-04 04:46:26.020692269 +0000 UTC m=+28.428692894 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020617 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020730 4802 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020731 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:26.02072365 +0000 UTC m=+28.428724265 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.020773 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:26.020764921 +0000 UTC m=+28.428765546 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.034457 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.069554 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.081589 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.081664 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.081677 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.081710 4802 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.081723 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.116427 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.152118 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.184553 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.184593 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.184603 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.184618 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.184628 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.190711 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.260222 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.283412 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.287812 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.287847 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.287857 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.287875 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.287886 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.315291 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.353102 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.359292 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.359358 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.359446 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:22 crc kubenswrapper[4802]: E1004 04:46:22.359509 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.390237 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.390293 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.390306 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.390328 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.390339 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.396409 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.434533 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.492832 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.492882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.492897 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.492918 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.492930 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.544539 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-895zh" event={"ID":"f50461a0-ea5a-4b08-a1ee-512ab8812dbf","Type":"ContainerStarted","Data":"a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.544597 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-895zh" event={"ID":"f50461a0-ea5a-4b08-a1ee-512ab8812dbf","Type":"ContainerStarted","Data":"4d0afc50848b2f8d3ae720b502bc95818b12e1b666ebb6f8bf2e488bd63db01d"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.547518 4802 generic.go:334] "Generic (PLEG): container finished" podID="951838a5-12ca-41a9-a0b2-df95499f89ac" containerID="b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884" exitCode=0 Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.547610 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerDied","Data":"b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.559967 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.560029 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.560040 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.560051 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.562204 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.563627 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.579387 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.595518 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.595567 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.595581 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.595600 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.595612 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.600046 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.613552 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.634201 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.677069 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.698723 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.698769 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.698779 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 
04:46:22.698795 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.698806 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.723695 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-
dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.755563 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.793392 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.801274 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.801331 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.801344 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.801365 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.801379 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.837097 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.873220 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.905175 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.905226 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.905240 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.905259 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.905275 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:22Z","lastTransitionTime":"2025-10-04T04:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.919935 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.956682 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:22 crc kubenswrapper[4802]: I1004 04:46:22.996813 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:22Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.008006 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.008050 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.008061 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.008080 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.008095 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.034536 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.079054 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.110615 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.110677 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.110689 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.110709 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.110723 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.115094 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.154260 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.194357 4802 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.214088 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.214131 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.214142 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.214159 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.214171 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.232733 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.275883 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.316940 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc 
kubenswrapper[4802]: I1004 04:46:23.316992 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.317008 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.317030 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.317043 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.331760 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.354844 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.358924 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:23 crc kubenswrapper[4802]: E1004 04:46:23.359096 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.394292 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.419925 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.419968 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.419977 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.419993 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.420003 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.436153 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.476890 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.514539 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.522395 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.522436 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.522446 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.522464 4802 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.522476 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.555466 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.568286 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerStarted","Data":"c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.572663 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.572737 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.595891 4802 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.625133 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.625189 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.625203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.625228 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.625245 4802 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.635232 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.689834 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.721947 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.728423 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.728481 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.728494 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.728548 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.728574 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.752374 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.796092 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.831335 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.831387 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.831399 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.831423 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.831438 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.835409 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.890935 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.917170 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.934166 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.934210 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.934224 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.934244 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.934261 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:23Z","lastTransitionTime":"2025-10-04T04:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.957111 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:23 crc kubenswrapper[4802]: I1004 04:46:23.995067 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:23Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.036837 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.037091 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.037151 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.037210 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.037267 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.037704 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.074281 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.113477 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.140119 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.140595 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.140859 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc 
kubenswrapper[4802]: I1004 04:46:24.141124 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.141307 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.155474 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.203670 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.235454 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9
f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.245153 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.245500 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.245671 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.245765 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.245860 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.349050 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.349125 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.349145 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.349173 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.349194 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.359073 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.359070 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:24 crc kubenswrapper[4802]: E1004 04:46:24.359279 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:24 crc kubenswrapper[4802]: E1004 04:46:24.359387 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.452424 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.452889 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.452900 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.452920 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.452951 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.555186 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.555250 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.555268 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.555295 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.555316 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.579933 4802 generic.go:334] "Generic (PLEG): container finished" podID="951838a5-12ca-41a9-a0b2-df95499f89ac" containerID="c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580" exitCode=0 Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.580015 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerDied","Data":"c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.607170 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.625196 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.639607 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.654444 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.658366 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.658412 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.658425 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.658444 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.658457 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.669245 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.686259 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.700938 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.727802 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.748616 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.764985 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 
04:46:24.765021 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.765033 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.765051 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.765067 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.771516 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.802206 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.814962 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.828247 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.841623 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.853046 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:24Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.873887 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.873938 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.873953 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc 
kubenswrapper[4802]: I1004 04:46:24.873976 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.873993 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.976532 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.976586 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.976601 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.976627 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:24 crc kubenswrapper[4802]: I1004 04:46:24.976680 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:24Z","lastTransitionTime":"2025-10-04T04:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.080494 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.080553 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.080571 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.080603 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.080625 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.183357 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.183938 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.183950 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.183968 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.183980 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.286661 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.286718 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.286731 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.286765 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.286782 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.358817 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:25 crc kubenswrapper[4802]: E1004 04:46:25.359053 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.389890 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.389934 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.389946 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.389966 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.389979 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.492950 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.492998 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.493011 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.493033 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.493069 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.586244 4802 generic.go:334] "Generic (PLEG): container finished" podID="951838a5-12ca-41a9-a0b2-df95499f89ac" containerID="681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee" exitCode=0 Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.586310 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerDied","Data":"681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.590587 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.594708 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.594750 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.594762 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.594777 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.594788 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.616545 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.636822 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.649147 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.665296 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.678806 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.697545 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.697586 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.697594 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.697610 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.697623 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.698620 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 
04:46:25.714389 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.734997 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.752679 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.766770 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.782543 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.800364 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.801840 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.801878 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.801887 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.801905 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.801916 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.814791 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.829994 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.850667 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:25Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.904771 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.904825 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.904837 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.904864 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.904881 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:25Z","lastTransitionTime":"2025-10-04T04:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:25 crc kubenswrapper[4802]: I1004 04:46:25.969393 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:25 crc kubenswrapper[4802]: E1004 04:46:25.969682 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:46:33.969631699 +0000 UTC m=+36.377632324 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.007397 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.007445 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.007454 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.007474 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: 
I1004 04:46:26.007485 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.070452 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.070509 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.070543 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.070583 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.070744 4802 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.070799 4802 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.071132 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.071163 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.070814 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:34.070795782 +0000 UTC m=+36.478796407 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.071181 4802 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.070840 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.071241 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.071248 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:34.071205933 +0000 UTC m=+36.479206688 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.071281 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:34.071266435 +0000 UTC m=+36.479267100 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.071251 4802 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.071347 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:34.071336087 +0000 UTC m=+36.479336742 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.109905 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.109960 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.109971 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.109987 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.110000 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.212611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.212683 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.212696 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.212715 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.212726 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.316394 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.316714 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.316800 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.316921 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.316991 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.358977 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.359164 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.359321 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:26 crc kubenswrapper[4802]: E1004 04:46:26.359482 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.420239 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.420872 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.421241 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.421405 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.421615 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.525214 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.525271 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.525285 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.525304 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.525317 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.598746 4802 generic.go:334] "Generic (PLEG): container finished" podID="951838a5-12ca-41a9-a0b2-df95499f89ac" containerID="eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533" exitCode=0 Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.598850 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerDied","Data":"eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.623219 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.628515 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.628590 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.628602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 
04:46:26.628622 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.628636 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.640832 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.656201 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.673484 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.689793 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.706990 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.723312 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.731617 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.731739 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.731766 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.731799 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.731822 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.739931 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.753960 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.766030 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.781955 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.797846 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.812517 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.828561 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.835221 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc 
kubenswrapper[4802]: I1004 04:46:26.835293 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.835304 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.835325 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.835338 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.848968 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:26Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.938177 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.938268 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.938286 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.938305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:26 crc kubenswrapper[4802]: I1004 04:46:26.938316 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:26Z","lastTransitionTime":"2025-10-04T04:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.041152 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.041190 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.041202 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.041217 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.041227 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.143799 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.143849 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.143858 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.143875 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.143887 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.247140 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.247187 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.247198 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.247215 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.247227 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.350743 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.350798 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.350815 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.350836 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.350853 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.359463 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:27 crc kubenswrapper[4802]: E1004 04:46:27.359777 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.454543 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.454599 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.454684 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.454713 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.454728 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.558488 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.558895 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.558904 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.558919 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.558930 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.607392 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerStarted","Data":"df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.630032 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60
fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.643031 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.658057 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.661788 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.661825 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.661835 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc 
kubenswrapper[4802]: I1004 04:46:27.661852 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.661863 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.671154 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.699369 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.710290 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9
f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.728728 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.746969 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.757869 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.764077 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.764115 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.764125 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.764145 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.764157 4802 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.771878 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-et
c-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.784160 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.798424 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.812015 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.823394 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.839145 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:27Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.867204 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 
04:46:27.867261 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.867274 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.867298 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.867311 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.969919 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.969961 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.969987 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.970006 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:27 crc kubenswrapper[4802]: I1004 04:46:27.970021 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:27Z","lastTransitionTime":"2025-10-04T04:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.075355 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.075416 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.075428 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.075451 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.075464 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.178591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.178655 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.178667 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.178687 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.178697 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.282054 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.282100 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.282110 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.282133 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.282145 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.359278 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.359278 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:28 crc kubenswrapper[4802]: E1004 04:46:28.359448 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:28 crc kubenswrapper[4802]: E1004 04:46:28.359787 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.380015 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.385287 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.385324 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.385340 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.385358 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.385373 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.399447 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\"
,\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.418616 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.451242 4802 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.466870 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.484343 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.488089 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.488121 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.488129 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.488146 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.488158 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.495585 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.519686 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d79
6aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.540597 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.556709 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w
78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.569393 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.578631 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.590063 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb
2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.592610 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.592663 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.592675 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.592689 4802 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.592725 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.601709 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.613697 4802 generic.go:334] "Generic (PLEG): container finished" podID="951838a5-12ca-41a9-a0b2-df95499f89ac" containerID="df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318" exitCode=0 Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.613773 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerDied","Data":"df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.617369 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.624224 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.624524 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.624547 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.633745 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.647870 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.649214 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.649270 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.649292 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.649319 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.649338 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.662566 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: E1004 04:46:28.666337 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.669769 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.669814 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.669826 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.669846 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.669858 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.676277 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: E1004 04:46:28.682530 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.686142 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.686203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.686218 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.686239 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.686255 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.691203 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.693891 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.694945 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:28 crc kubenswrapper[4802]: E1004 04:46:28.701955 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.705178 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.706192 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.706277 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.706290 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.706314 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.706329 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: E1004 04:46:28.723404 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.726728 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.730419 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.730463 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.730476 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.730507 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.730519 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.741065 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: E1004 04:46:28.744776 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: E1004 04:46:28.744912 4802 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.746882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.746910 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.746919 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.746936 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.746948 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.756497 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb
43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.770257 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.782069 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.795616 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.818853 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.832753 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.843514 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.850098 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.850137 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.850146 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.850162 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.850174 4802 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.867491 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedc
c858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.885330 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a
681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.898043 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.916235 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.933491 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.951755 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.952990 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.953027 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.953039 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.953057 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.953069 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:28Z","lastTransitionTime":"2025-10-04T04:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.973666 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:28 crc kubenswrapper[4802]: I1004 04:46:28.989070 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.006731 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.022005 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.044496 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.056728 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc 
kubenswrapper[4802]: I1004 04:46:29.057178 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.057197 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.057218 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.057233 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.067561 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.082371 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d52
9d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.100601 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.118379 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.159288 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.159321 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.159331 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 
04:46:29.159348 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.159358 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.262227 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.262277 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.262287 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.262303 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.262316 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.352308 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.353095 4802 scope.go:117] "RemoveContainer" containerID="3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d" Oct 04 04:46:29 crc kubenswrapper[4802]: E1004 04:46:29.353292 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.359074 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:29 crc kubenswrapper[4802]: E1004 04:46:29.359174 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.365174 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.365209 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.365218 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.365234 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.365244 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.468762 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.468815 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.468832 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.468852 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.468869 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.573753 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.573821 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.573841 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.573871 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.573891 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.632529 4802 generic.go:334] "Generic (PLEG): container finished" podID="951838a5-12ca-41a9-a0b2-df95499f89ac" containerID="1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8" exitCode=0 Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.632725 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.632829 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerDied","Data":"1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.650589 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.669672 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.676759 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.676795 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.676804 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.676822 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.676833 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.684969 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.702024 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.716343 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.731155 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.754890 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.765891 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.791264 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.808057 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.808111 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.808128 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.808151 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.808166 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.824087 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.855531 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d79
6aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.870976 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.890224 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.905732 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.910412 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.910444 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.910453 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.910471 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.910481 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:29Z","lastTransitionTime":"2025-10-04T04:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:29 crc kubenswrapper[4802]: I1004 04:46:29.919128 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:29Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.014472 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.014537 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.014553 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.014578 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.014594 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.117954 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.118036 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.118062 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.118095 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.118122 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.222193 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.222276 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.222323 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.222374 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.222403 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.325027 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.325070 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.325078 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.325095 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.325106 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.359523 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.359528 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:30 crc kubenswrapper[4802]: E1004 04:46:30.359725 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:30 crc kubenswrapper[4802]: E1004 04:46:30.359797 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.428164 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.428235 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.428343 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.428386 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.428414 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.531283 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.531341 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.531357 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.531386 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.531405 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.634597 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.634633 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.634662 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.634678 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.634687 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.641181 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.642629 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" event={"ID":"951838a5-12ca-41a9-a0b2-df95499f89ac","Type":"ContainerStarted","Data":"b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.664968 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.687409 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8e
bd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.705009 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.720596 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.734838 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.737511 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.737546 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.737556 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.737572 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.737582 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.752783 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.766348 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.778164 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.794476 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.819454 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.835380 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.840964 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.841007 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.841019 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.841038 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 
04:46:30.841051 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.854142 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-poli
cy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.872859 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.885166 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.904972 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.944269 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.944578 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.944711 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 04:46:30.944788 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:30 crc kubenswrapper[4802]: I1004 
04:46:30.944893 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:30Z","lastTransitionTime":"2025-10-04T04:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.048305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.049175 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.049496 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.049821 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.050065 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.154114 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.154182 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.154200 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.154241 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.154258 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.257391 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.257434 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.257445 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.257465 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.257477 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.358797 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:31 crc kubenswrapper[4802]: E1004 04:46:31.359053 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.360712 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.360795 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.360819 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.360847 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.360866 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.464266 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.464324 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.464341 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.464360 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.464372 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.469962 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p"] Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.470544 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.472773 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.473437 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.490102 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.503469 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.523745 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.532529 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6a371e2b-4a47-45ba-9141-dcd616fa19be-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.532585 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6a371e2b-4a47-45ba-9141-dcd616fa19be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.532606 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmdf\" (UniqueName: \"kubernetes.io/projected/6a371e2b-4a47-45ba-9141-dcd616fa19be-kube-api-access-kvmdf\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: 
I1004 04:46:31.532813 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6a371e2b-4a47-45ba-9141-dcd616fa19be-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.538434 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.553203 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.565658 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.567475 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 
04:46:31.567540 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.567559 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.567590 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.567612 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.580019 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.595155 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.609100 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.625603 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.634044 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6a371e2b-4a47-45ba-9141-dcd616fa19be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.634092 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvmdf\" (UniqueName: \"kubernetes.io/projected/6a371e2b-4a47-45ba-9141-dcd616fa19be-kube-api-access-kvmdf\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.634115 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6a371e2b-4a47-45ba-9141-dcd616fa19be-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.634180 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6a371e2b-4a47-45ba-9141-dcd616fa19be-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.634829 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6a371e2b-4a47-45ba-9141-dcd616fa19be-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: 
I1004 04:46:31.635187 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6a371e2b-4a47-45ba-9141-dcd616fa19be-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.642567 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6a371e2b-4a47-45ba-9141-dcd616fa19be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.647708 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/0.log" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.650253 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.653597 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6" exitCode=1 Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.653812 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvmdf\" (UniqueName: \"kubernetes.io/projected/6a371e2b-4a47-45ba-9141-dcd616fa19be-kube-api-access-kvmdf\") pod \"ovnkube-control-plane-749d76644c-5rx2p\" (UID: \"6a371e2b-4a47-45ba-9141-dcd616fa19be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.653940 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.656045 4802 scope.go:117] "RemoveContainer" containerID="54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.665165 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.671781 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.671842 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.671856 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.671880 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.671897 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.696629 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.716999 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.729231 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.743592 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.761600 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.774983 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.775039 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.775056 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.775077 4802 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.775090 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.775838 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.786509 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.787753 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-
rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.805165 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.827538 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"I1004 04:46:31.239318 6071 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:31.239325 6071 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:31.239374 6071 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI1004 04:46:31.240384 6071 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:31.240398 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:31.240403 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:31.240408 6071 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:31.241504 6071 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:31.241557 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1004 04:46:31.241567 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1004 04:46:31.241581 6071 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:31.241623 6071 factory.go:656] Stopping watch factory\\\\nI1004 04:46:31.241671 6071 ovnkube.go:599] Stopped ovnkube\\\\nI1004 04:46:31.241685 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1004 04:46:31.241689 6071 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1004 04:46:31.241731 6071 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fc
a4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.841825 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.864970 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04
:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.880112 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.880441 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.880472 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.880514 4802 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.880534 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.882650 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.895068 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.907576 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.924397 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.938703 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.954461 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8e
bd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.968233 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.981097 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.982991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.983022 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.983030 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.983046 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.983062 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:31Z","lastTransitionTime":"2025-10-04T04:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:31 crc kubenswrapper[4802]: I1004 04:46:31.994411 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:31Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.087334 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.087412 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.087426 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.087443 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.087482 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.190798 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.191470 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.191481 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.191500 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.191511 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.294708 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.294767 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.294782 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.294803 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.294849 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.359408 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.359439 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:32 crc kubenswrapper[4802]: E1004 04:46:32.359575 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:32 crc kubenswrapper[4802]: E1004 04:46:32.359689 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.397344 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.397392 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.397405 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.397423 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.397435 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.499912 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.499951 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.499964 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.499996 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.500011 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.602195 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.602241 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.602254 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.602272 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.602288 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.666339 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" event={"ID":"6a371e2b-4a47-45ba-9141-dcd616fa19be","Type":"ContainerStarted","Data":"39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.666417 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" event={"ID":"6a371e2b-4a47-45ba-9141-dcd616fa19be","Type":"ContainerStarted","Data":"6959cffd40d37bb3bb04276ba5917b16468cd3d3970f1e4a3f7c081ebc2cfd4b"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.669631 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/0.log" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.672916 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.673131 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.690923 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.705309 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.705361 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.705375 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.705406 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.705422 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.712879 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.761387 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.777215 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.798196 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"I1004 04:46:31.239318 6071 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:31.239325 6071 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:31.239374 6071 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:31.240384 6071 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI1004 04:46:31.240398 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:31.240403 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:31.240408 6071 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:31.241504 6071 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:31.241557 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1004 04:46:31.241567 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1004 04:46:31.241581 6071 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:31.241623 6071 factory.go:656] Stopping watch factory\\\\nI1004 04:46:31.241671 6071 ovnkube.go:599] Stopped ovnkube\\\\nI1004 04:46:31.241685 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1004 04:46:31.241689 6071 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1004 04:46:31.241731 6071 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-co
nfig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-
setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.808878 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.808918 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.808931 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.808951 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.808963 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.812717 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.829665 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.848779 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.864960 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.887923 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.903741 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.911980 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.912024 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.912034 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.912053 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.912064 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:32Z","lastTransitionTime":"2025-10-04T04:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.915725 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.928743 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.943570 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.956061 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.956890 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n27xq"] Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.957499 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:32 crc kubenswrapper[4802]: E1004 04:46:32.957575 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.971434 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:32 crc kubenswrapper[4802]: I1004 04:46:32.991718 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:32Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.008163 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.014762 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.014799 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.014809 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.014829 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc 
kubenswrapper[4802]: I1004 04:46:33.014843 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.019087 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.032075 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.050025 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c27hr\" (UniqueName: \"kubernetes.io/projected/0d189ff1-3446-47fe-bcea-6b09e72a4567-kube-api-access-c27hr\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.050071 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.051943 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.064748 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.084130 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.098880 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.114444 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.120535 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.120612 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.120627 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.120674 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.120694 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.135109 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.151246 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.151334 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c27hr\" (UniqueName: 
\"kubernetes.io/projected/0d189ff1-3446-47fe-bcea-6b09e72a4567-kube-api-access-c27hr\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:33 crc kubenswrapper[4802]: E1004 04:46:33.151548 4802 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:33 crc kubenswrapper[4802]: E1004 04:46:33.151692 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs podName:0d189ff1-3446-47fe-bcea-6b09e72a4567 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:33.651662362 +0000 UTC m=+36.059663147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs") pod "network-metrics-daemon-n27xq" (UID: "0d189ff1-3446-47fe-bcea-6b09e72a4567") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.153020 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.167087 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.172591 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c27hr\" (UniqueName: \"kubernetes.io/projected/0d189ff1-3446-47fe-bcea-6b09e72a4567-kube-api-access-c27hr\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.183409 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.201881 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.219100 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.223835 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc 
kubenswrapper[4802]: I1004 04:46:33.223883 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.223896 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.223915 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.223927 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.244416 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"I1004 04:46:31.239318 6071 handler.go:190] Sending *v1.Pod event handler 6 for 
removal\\\\nI1004 04:46:31.239325 6071 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:31.239374 6071 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:31.240384 6071 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:31.240398 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:31.240403 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:31.240408 6071 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:31.241504 6071 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:31.241557 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1004 04:46:31.241567 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1004 04:46:31.241581 6071 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:31.241623 6071 factory.go:656] Stopping watch factory\\\\nI1004 04:46:31.241671 6071 ovnkube.go:599] Stopped ovnkube\\\\nI1004 04:46:31.241685 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1004 04:46:31.241689 6071 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1004 04:46:31.241731 6071 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-co
nfig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-
setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.255985 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.326115 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.326155 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.326164 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.326178 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.326189 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.358846 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:33 crc kubenswrapper[4802]: E1004 04:46:33.359028 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.429507 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.429584 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.429595 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.429616 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.429631 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.531932 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.531981 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.531991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.532008 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.532020 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.634431 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.634484 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.634500 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.634523 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.634537 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.657142 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:33 crc kubenswrapper[4802]: E1004 04:46:33.657330 4802 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:33 crc kubenswrapper[4802]: E1004 04:46:33.657409 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs podName:0d189ff1-3446-47fe-bcea-6b09e72a4567 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:34.657390072 +0000 UTC m=+37.065390697 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs") pod "network-metrics-daemon-n27xq" (UID: "0d189ff1-3446-47fe-bcea-6b09e72a4567") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.679431 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" event={"ID":"6a371e2b-4a47-45ba-9141-dcd616fa19be","Type":"ContainerStarted","Data":"daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.681603 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/1.log" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.682355 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/0.log" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.685377 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773" exitCode=1 Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.685416 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.685449 4802 scope.go:117] "RemoveContainer" containerID="54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.686452 4802 scope.go:117] "RemoveContainer" 
containerID="9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773" Oct 04 04:46:33 crc kubenswrapper[4802]: E1004 04:46:33.686699 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.703407 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.715213 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.727796 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.736714 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.736752 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.736763 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 
04:46:33.736781 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.736792 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.742175 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.755104 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.773192 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"I1004 04:46:31.239318 6071 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:31.239325 6071 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:31.239374 6071 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:31.240384 6071 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI1004 04:46:31.240398 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:31.240403 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:31.240408 6071 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:31.241504 6071 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:31.241557 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1004 04:46:31.241567 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1004 04:46:31.241581 6071 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:31.241623 6071 factory.go:656] Stopping watch factory\\\\nI1004 04:46:31.241671 6071 ovnkube.go:599] Stopped ovnkube\\\\nI1004 04:46:31.241685 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1004 04:46:31.241689 6071 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1004 04:46:31.241731 6071 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for 
removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath
\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.791855 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"fini
shedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"na
me\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.806146 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.816046 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.827234 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.839808 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.839851 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.839863 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.839882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.839895 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.842696 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.856670 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.871223 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.883161 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc 
kubenswrapper[4802]: I1004 04:46:33.897929 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.914020 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.927306 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.947783 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 
04:46:33.947834 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.947845 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.947864 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:33 crc kubenswrapper[4802]: I1004 04:46:33.947876 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:33Z","lastTransitionTime":"2025-10-04T04:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.050457 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.050506 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.050516 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.050535 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.050549 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.061016 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.061458 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:46:50.061424035 +0000 UTC m=+52.469424660 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.153608 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.153701 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.153719 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.153738 4802 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.153753 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.162517 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.162558 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.162609 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.162662 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162697 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162728 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162740 4802 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162746 4802 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162784 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:50.162767362 +0000 UTC m=+52.570767987 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162804 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:50.162793713 +0000 UTC m=+52.570794328 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162802 4802 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162867 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:50.162846874 +0000 UTC m=+52.570847519 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.162940 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.163009 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.163031 4802 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.163141 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:50.163107421 +0000 UTC m=+52.571108096 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.256806 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.256857 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.256872 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.256893 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.256907 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.358799 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.358907 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.358957 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.358805 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.359066 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.359144 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.360418 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.360523 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.360611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.360723 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.360799 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.463690 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.463755 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.463772 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.463794 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.463810 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.567398 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.567509 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.567523 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.567544 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.567559 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.668487 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.668626 4802 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: E1004 04:46:34.668708 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs podName:0d189ff1-3446-47fe-bcea-6b09e72a4567 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:36.668691176 +0000 UTC m=+39.076691801 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs") pod "network-metrics-daemon-n27xq" (UID: "0d189ff1-3446-47fe-bcea-6b09e72a4567") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.670630 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.670725 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.670739 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.670761 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.670774 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.692218 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/1.log" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.713082 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc 
kubenswrapper[4802]: I1004 04:46:34.735607 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.753787 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.770898 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12
f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8e
e4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.774203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.774262 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.774284 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.774311 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.774330 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.792848 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.817443 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.837591 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.851770 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.871918 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.876919 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc 
kubenswrapper[4802]: I1004 04:46:34.876973 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.876985 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.877005 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.877019 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.903913 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ee9ce9d9a270c3c254d8f8c26236f49fe06c0d2d52a64c0ad5f813829531c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"message\\\":\\\"I1004 04:46:31.239318 6071 handler.go:190] Sending *v1.Pod event handler 6 for 
removal\\\\nI1004 04:46:31.239325 6071 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:31.239374 6071 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:31.240384 6071 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:31.240398 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:31.240403 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:31.240408 6071 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:31.241504 6071 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:31.241557 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1004 04:46:31.241567 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1004 04:46:31.241581 6071 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:31.241623 6071 factory.go:656] Stopping watch factory\\\\nI1004 04:46:31.241671 6071 ovnkube.go:599] Stopped ovnkube\\\\nI1004 04:46:31.241685 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1004 04:46:31.241689 6071 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1004 04:46:31.241731 6071 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 
04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.920168 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.940762 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.961471 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.976148 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fe
a12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:34Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.980401 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.980430 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.980440 4802 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.980458 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:34 crc kubenswrapper[4802]: I1004 04:46:34.980473 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:34Z","lastTransitionTime":"2025-10-04T04:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.011365 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe
21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:35Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.028204 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:35Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.045595 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:35Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.083809 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.083860 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.083875 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.083929 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.083944 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.186343 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.186386 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.186396 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.186412 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.186423 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.289901 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.289958 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.289977 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.290002 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.290021 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.361213 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:35 crc kubenswrapper[4802]: E1004 04:46:35.361423 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.393164 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.393256 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.393283 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.393311 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.393332 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.496984 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.497069 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.497098 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.497135 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.497164 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.601182 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.601220 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.601231 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.601248 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.601260 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.703392 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.704202 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.704219 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.704236 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.704253 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.808224 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.808306 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.808331 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.808361 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.808380 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.911580 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.911627 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.911636 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.911676 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:35 crc kubenswrapper[4802]: I1004 04:46:35.911694 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:35Z","lastTransitionTime":"2025-10-04T04:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.017944 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.017994 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.018011 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.018030 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.018039 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.121349 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.121404 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.121415 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.121440 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.121453 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.225140 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.225206 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.225226 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.225253 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.225289 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.329282 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.329377 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.329403 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.329436 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.329459 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.359461 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.359461 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:36 crc kubenswrapper[4802]: E1004 04:46:36.359717 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.359740 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:36 crc kubenswrapper[4802]: E1004 04:46:36.359829 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:36 crc kubenswrapper[4802]: E1004 04:46:36.360041 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.433391 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.433442 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.433452 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.433471 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.433488 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.536932 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.537007 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.537029 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.537058 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.537084 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.640622 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.640755 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.640785 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.640822 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.640843 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.692496 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.693214 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:36 crc kubenswrapper[4802]: E1004 04:46:36.693430 4802 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:36 crc kubenswrapper[4802]: E1004 04:46:36.693527 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs podName:0d189ff1-3446-47fe-bcea-6b09e72a4567 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:40.693498796 +0000 UTC m=+43.101499451 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs") pod "network-metrics-daemon-n27xq" (UID: "0d189ff1-3446-47fe-bcea-6b09e72a4567") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.693680 4802 scope.go:117] "RemoveContainer" containerID="9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773" Oct 04 04:46:36 crc kubenswrapper[4802]: E1004 04:46:36.693902 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.718520 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.744079 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.744130 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.744146 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc 
kubenswrapper[4802]: I1004 04:46:36.744173 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.744193 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.745241 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 
04:46:36.759337 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.772406 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.786150 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.801841 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.816412 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.844463 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.846720 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.846757 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.846769 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.846785 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.846796 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.858573 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.880551 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.895506 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.907749 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.921160 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fe
a12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.934480 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.945399 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.949313 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.949365 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.949382 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.949405 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.949421 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:36Z","lastTransitionTime":"2025-10-04T04:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.961352 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:36 crc kubenswrapper[4802]: I1004 04:46:36.975489 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:36Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:37 crc 
kubenswrapper[4802]: I1004 04:46:37.052836 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.052893 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.052908 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.052933 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.052952 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.156816 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.156883 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.156901 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.156927 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.156945 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.260897 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.260982 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.261012 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.261041 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.261063 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.359754 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:37 crc kubenswrapper[4802]: E1004 04:46:37.360008 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.363847 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.363871 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.363882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.363898 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.363910 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.467020 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.467068 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.467081 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.467101 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.467114 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.569773 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.569844 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.569857 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.569879 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.569893 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.672340 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.672422 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.672450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.672481 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.672504 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.775794 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.776453 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.776528 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.776611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.776755 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.880286 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.880339 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.880351 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.880368 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.880379 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.983527 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.983576 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.983587 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.983603 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:37 crc kubenswrapper[4802]: I1004 04:46:37.983614 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:37Z","lastTransitionTime":"2025-10-04T04:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.086846 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.086913 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.086927 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.086947 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.086962 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.189812 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.189863 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.189875 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.189894 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.189909 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.293144 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.293208 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.293225 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.293260 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.293278 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.359224 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.359308 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.359325 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:38 crc kubenswrapper[4802]: E1004 04:46:38.359468 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:38 crc kubenswrapper[4802]: E1004 04:46:38.359767 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:38 crc kubenswrapper[4802]: E1004 04:46:38.359991 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.385994 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-0
4T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.396620 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.396706 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.396722 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.396749 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.396768 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.402179 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.416791 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.436048 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fe
a12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.454012 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.474690 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.499468 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.499525 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.499536 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.499559 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.499572 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.502607 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.520967 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc 
kubenswrapper[4802]: I1004 04:46:38.537403 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.554058 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.569209 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.582560 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.599407 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.602032 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.602068 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.602080 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.602100 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.602113 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.618952 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.633570 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.651784 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.674218 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.705417 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.705477 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.705486 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.705505 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.705520 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.808529 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.808578 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.808589 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.808607 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.808618 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.912187 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.912249 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.912270 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.912292 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.912308 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.957437 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.957502 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.957519 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.957548 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.957565 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: E1004 04:46:38.972537 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.977706 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.977757 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.977784 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.977810 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:38 crc kubenswrapper[4802]: I1004 04:46:38.977827 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:38Z","lastTransitionTime":"2025-10-04T04:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:38 crc kubenswrapper[4802]: E1004 04:46:38.996197 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:38Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.000952 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.000988 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.000998 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.001017 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.001031 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: E1004 04:46:39.016834 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:39Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.021919 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.021982 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.021999 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.022019 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.022034 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: E1004 04:46:39.036479 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:39Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.040826 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.040860 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.040868 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.040886 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.040897 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: E1004 04:46:39.054975 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:39Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:39 crc kubenswrapper[4802]: E1004 04:46:39.055127 4802 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.057356 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.057418 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.057430 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.057450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.057461 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.160755 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.160795 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.160809 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.160828 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.160937 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.264011 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.264071 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.264081 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.264102 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.264114 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.358744 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:39 crc kubenswrapper[4802]: E1004 04:46:39.358904 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.366680 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.366724 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.366735 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.366751 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.366764 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.469838 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.469907 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.469923 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.469948 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.469965 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.572814 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.572882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.572895 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.572915 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.572928 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.676417 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.676482 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.676496 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.676519 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.676533 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.780039 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.780107 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.780123 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.780151 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.780169 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.882596 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.882637 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.882672 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.882693 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.882704 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.985710 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.985764 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.985776 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.985796 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:39 crc kubenswrapper[4802]: I1004 04:46:39.985811 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:39Z","lastTransitionTime":"2025-10-04T04:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.089424 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.089473 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.089486 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.089504 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.089518 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.193670 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.193727 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.193740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.193769 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.193784 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.296937 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.296995 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.297007 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.297026 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.297040 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.359554 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.359763 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.359894 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:40 crc kubenswrapper[4802]: E1004 04:46:40.359820 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:40 crc kubenswrapper[4802]: E1004 04:46:40.360102 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:40 crc kubenswrapper[4802]: E1004 04:46:40.360207 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.400454 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.400516 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.400530 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.400554 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.400570 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.503941 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.503989 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.504000 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.504020 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.504034 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.607232 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.607289 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.607305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.607332 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.607352 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.710751 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.710820 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.710839 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.710867 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.710888 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.740355 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:40 crc kubenswrapper[4802]: E1004 04:46:40.740614 4802 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:40 crc kubenswrapper[4802]: E1004 04:46:40.740735 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs podName:0d189ff1-3446-47fe-bcea-6b09e72a4567 nodeName:}" failed. No retries permitted until 2025-10-04 04:46:48.740708271 +0000 UTC m=+51.148709086 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs") pod "network-metrics-daemon-n27xq" (UID: "0d189ff1-3446-47fe-bcea-6b09e72a4567") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.814297 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.814354 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.814365 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.814386 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.814400 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.918180 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.918243 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.918254 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.918274 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:40 crc kubenswrapper[4802]: I1004 04:46:40.918286 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:40Z","lastTransitionTime":"2025-10-04T04:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.021454 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.021577 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.021588 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.021605 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.021617 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.124621 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.124710 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.124727 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.124752 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.124770 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.228050 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.228116 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.228130 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.228152 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.228166 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.331827 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.331918 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.331944 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.331982 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.332010 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.359491 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:41 crc kubenswrapper[4802]: E1004 04:46:41.359731 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.436112 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.436196 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.436211 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.436234 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.436246 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.539036 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.539084 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.539095 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.539114 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.539140 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.642337 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.642386 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.642401 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.642423 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.642434 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.746342 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.746407 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.746419 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.746441 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.746458 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.849868 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.849928 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.849941 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.849961 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.849974 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.957990 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.958063 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.958078 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.958102 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:41 crc kubenswrapper[4802]: I1004 04:46:41.958117 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:41Z","lastTransitionTime":"2025-10-04T04:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.061306 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.061356 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.061371 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.061391 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.061409 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.164165 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.164220 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.164230 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.164253 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.164264 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.267349 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.267406 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.267417 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.267435 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.267448 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.359687 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.359744 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.359740 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:42 crc kubenswrapper[4802]: E1004 04:46:42.359863 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:42 crc kubenswrapper[4802]: E1004 04:46:42.359991 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:42 crc kubenswrapper[4802]: E1004 04:46:42.360090 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.370052 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.370083 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.370094 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.370110 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.370132 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.472742 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.472791 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.472802 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.472839 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.472853 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.576169 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.576257 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.576276 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.576306 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.576321 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.679387 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.679448 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.679464 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.679487 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.679503 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.782349 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.782414 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.782424 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.782443 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.782454 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.886088 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.886149 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.886164 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.886187 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.886200 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.990240 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.990301 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.990312 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.990335 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:42 crc kubenswrapper[4802]: I1004 04:46:42.990349 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:42Z","lastTransitionTime":"2025-10-04T04:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.093687 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.093739 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.093750 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.093768 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.093782 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.196972 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.197040 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.197059 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.197089 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.197108 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.300042 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.300112 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.300132 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.300159 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.300179 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.358862 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:43 crc kubenswrapper[4802]: E1004 04:46:43.359098 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.405671 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.405730 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.405747 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.405773 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.405792 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.508771 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.508810 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.508829 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.508853 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.508868 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.611252 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.611303 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.611314 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.611335 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.611345 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.715000 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.715045 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.715056 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.715072 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.715082 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.817825 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.817867 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.817878 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.817897 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.817909 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.921031 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.921086 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.921101 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.921124 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:43 crc kubenswrapper[4802]: I1004 04:46:43.921140 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:43Z","lastTransitionTime":"2025-10-04T04:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.023827 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.023861 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.023870 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.023885 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.023896 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.126321 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.126381 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.126394 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.126415 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.126427 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.230269 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.230361 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.230374 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.230398 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.230416 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.334088 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.334147 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.334163 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.334185 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.334201 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.359394 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.359444 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.359550 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:44 crc kubenswrapper[4802]: E1004 04:46:44.359734 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:44 crc kubenswrapper[4802]: E1004 04:46:44.359949 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:44 crc kubenswrapper[4802]: E1004 04:46:44.360361 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.360950 4802 scope.go:117] "RemoveContainer" containerID="3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.437874 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.437926 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.437936 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.437971 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.437982 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.543290 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.543358 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.543372 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.543392 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.543427 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.648004 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.648134 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.648155 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.648182 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.648235 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.739394 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.742731 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.743308 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.752438 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.752505 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.752527 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.752588 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.752614 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.759308 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.788407 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.807199 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.828925 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.839983 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.856053 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.858912 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.859058 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.859166 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.866418 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.866684 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.870414 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.885293 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.899022 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.920904 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.937163 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.949373 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.962065 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fe
a12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.970695 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.971204 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.971841 4802 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.972065 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.972206 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:44Z","lastTransitionTime":"2025-10-04T04:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.976340 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:44 crc kubenswrapper[4802]: I1004 04:46:44.994701 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-04T04:46:44Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.013071 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:45Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.028113 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:45Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:45 crc 
kubenswrapper[4802]: I1004 04:46:45.075268 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.075762 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.076075 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.076316 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.076615 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.180524 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.180587 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.180611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.180683 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.180723 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.283603 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.283730 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.283747 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.283766 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.283779 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.359808 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:45 crc kubenswrapper[4802]: E1004 04:46:45.360016 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.387688 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.387757 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.387777 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.387805 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.387824 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.490590 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.490710 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.490724 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.490746 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.490760 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.594402 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.594476 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.594497 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.594524 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.594547 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.697069 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.697116 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.697132 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.697155 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.697172 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.801505 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.801557 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.801569 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.801587 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.801599 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.904823 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.904875 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.904893 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.904921 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:45 crc kubenswrapper[4802]: I1004 04:46:45.904940 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:45Z","lastTransitionTime":"2025-10-04T04:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.007923 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.007974 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.007988 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.008006 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.008018 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.111664 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.112362 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.112888 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.112956 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.112978 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.215377 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.215420 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.215431 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.215448 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.215460 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.318487 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.318544 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.318563 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.318586 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.318607 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.358847 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.358885 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.358937 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:46 crc kubenswrapper[4802]: E1004 04:46:46.359009 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:46 crc kubenswrapper[4802]: E1004 04:46:46.359149 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:46 crc kubenswrapper[4802]: E1004 04:46:46.359319 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.421839 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.421926 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.421940 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.421962 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.421980 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.525104 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.525448 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.525576 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.525728 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.525840 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.628913 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.628985 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.629003 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.629032 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.629048 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.731842 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.731880 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.731893 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.731911 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.731924 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.835194 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.835234 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.835263 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.835281 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.835291 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.938401 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.938445 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.938456 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.938473 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:46 crc kubenswrapper[4802]: I1004 04:46:46.938486 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:46Z","lastTransitionTime":"2025-10-04T04:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.041236 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.041287 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.041298 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.041315 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.041328 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.144259 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.144328 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.144341 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.144360 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.144374 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.247096 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.247200 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.247218 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.247246 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.247267 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.350251 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.350305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.350317 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.350336 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.350347 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.358771 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:47 crc kubenswrapper[4802]: E1004 04:46:47.359050 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.454602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.454719 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.454747 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.454773 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.454790 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.557527 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.557570 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.557579 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.557598 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.557613 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.661281 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.661571 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.661989 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.662090 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.662167 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.765884 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.765944 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.765961 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.765985 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.766002 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.869411 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.869457 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.869466 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.869485 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.869496 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.971684 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.971735 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.971745 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.971762 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:47 crc kubenswrapper[4802]: I1004 04:46:47.971773 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:47Z","lastTransitionTime":"2025-10-04T04:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.073959 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.074009 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.074023 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.074042 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.074052 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.177854 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.177897 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.177908 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.177924 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.177936 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.280763 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.280810 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.280822 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.280842 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.280855 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.359770 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.359910 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:48 crc kubenswrapper[4802]: E1004 04:46:48.360027 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:48 crc kubenswrapper[4802]: E1004 04:46:48.360142 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.360241 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:48 crc kubenswrapper[4802]: E1004 04:46:48.360301 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.376431 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.383318 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.383443 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.383528 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.383606 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.383678 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.391609 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.411988 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.425834 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.439512 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.453346 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.465989 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.486124 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 
04:46:48.486166 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.486178 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.486197 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.486210 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.487611 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.520201 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.543790 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.566295 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.586211 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.588525 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.588579 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.588591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.588609 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.588621 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.597689 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.621362 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.638603 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.650046 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.663914 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fe
a12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:48Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.691899 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.691961 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.691973 4802 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.691996 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.692010 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.794967 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.795021 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.795033 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.795054 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.795066 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.834470 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:48 crc kubenswrapper[4802]: E1004 04:46:48.834679 4802 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:48 crc kubenswrapper[4802]: E1004 04:46:48.834753 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs podName:0d189ff1-3446-47fe-bcea-6b09e72a4567 nodeName:}" failed. No retries permitted until 2025-10-04 04:47:04.834736148 +0000 UTC m=+67.242736773 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs") pod "network-metrics-daemon-n27xq" (UID: "0d189ff1-3446-47fe-bcea-6b09e72a4567") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.898094 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.898164 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.898190 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.898220 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:48 crc kubenswrapper[4802]: I1004 04:46:48.898246 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:48Z","lastTransitionTime":"2025-10-04T04:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.001391 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.001451 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.001467 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.001515 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.001533 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.105364 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.105409 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.105418 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.105436 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.105448 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.208431 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.208490 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.208503 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.208525 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.208543 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.279875 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.279923 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.279937 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.279953 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.279965 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: E1004 04:46:49.296072 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.301832 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.301896 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.301910 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.301932 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.301949 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: E1004 04:46:49.321239 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.326608 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.326681 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.326693 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.326714 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.326743 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: E1004 04:46:49.341941 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.346807 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.346862 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.346873 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.346893 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.346904 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.359569 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:49 crc kubenswrapper[4802]: E1004 04:46:49.359752 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.360841 4802 scope.go:117] "RemoveContainer" containerID="9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773" Oct 04 04:46:49 crc kubenswrapper[4802]: E1004 04:46:49.362468 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.368671 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.368726 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.368740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.368761 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.368773 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: E1004 04:46:49.385198 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: E1004 04:46:49.385507 4802 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.387521 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.387566 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.387581 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.387602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.387617 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.491037 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.491588 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.491602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.491625 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.491663 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.594202 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.594238 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.594249 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.594264 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.594275 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.697623 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.697696 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.697706 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.697724 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.697737 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.764778 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/1.log"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.768275 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39"}
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.768865 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw"
Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.784567 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.800332 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.800494 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.800534 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.800550 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.800596 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.800612 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.816954 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.835690 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.860415 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.874038 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.892791 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.903889 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.903941 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.903952 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.903971 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.903984 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:49Z","lastTransitionTime":"2025-10-04T04:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.946334 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.962567 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:49 crc kubenswrapper[4802]: I1004 04:46:49.991591 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:49Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.007084 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.007121 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.007129 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.007144 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.007154 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.008787 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.021383 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.034684 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fe
a12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.050489 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.070220 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.088502 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.101663 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.110321 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.110380 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.110394 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.110417 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.110432 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.149185 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.149506 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:47:22.149453207 +0000 UTC m=+84.557453842 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.213194 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.213252 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.213264 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.213284 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: 
I1004 04:46:50.213296 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.250872 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.250952 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.250984 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.251012 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251040 4802 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251178 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:47:22.251151334 +0000 UTC m=+84.659151959 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251214 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251240 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251254 4802 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251262 4802 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251316 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-04 04:47:22.251296998 +0000 UTC m=+84.659297783 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251323 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251366 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251376 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:47:22.251348469 +0000 UTC m=+84.659349094 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251383 4802 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.251474 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-04 04:47:22.251448212 +0000 UTC m=+84.659449027 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.316564 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.316834 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.316845 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.316865 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.316877 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.359354 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.359414 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.359535 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.359743 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.359917 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.360122 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.421103 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.421152 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.421167 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.421188 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.421202 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.524271 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.524347 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.524382 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.524412 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.524433 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.626598 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.626658 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.626668 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.626687 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.626702 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.729595 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.729736 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.729766 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.729801 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.729824 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.775115 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/2.log" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.776094 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/1.log" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.779606 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39" exitCode=1 Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.779678 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.779824 4802 scope.go:117] "RemoveContainer" containerID="9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.781931 4802 scope.go:117] "RemoveContainer" containerID="6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39" Oct 04 04:46:50 crc kubenswrapper[4802]: E1004 04:46:50.782223 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.796394 4802 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.818831 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.833110 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.833171 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.833195 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.833225 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.833250 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.842914 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.864061 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.885843 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.893509 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.901687 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.905801 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.919611 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.934479 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.936626 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.936687 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.936704 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 
04:46:50.936727 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.936746 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:50Z","lastTransitionTime":"2025-10-04T04:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.951588 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.974095 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/ma
nifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545
bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a564
6fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:50 crc kubenswrapper[4802]: I1004 04:46:50.991131 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.001866 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.015331 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fe
a12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.028427 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.039849 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.039897 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.039908 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.039929 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.039944 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.046370 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.061750 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.074483 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.087811 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d
5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.101799 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.115068 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.130483 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.142943 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc 
kubenswrapper[4802]: I1004 04:46:51.143000 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.143015 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.143036 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.143049 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.150972 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e60afc4911972cfd3b246ecb297121d4ca87e9f109250eab756a04af4bd0773\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:33Z\\\",\\\"message\\\":\\\"04:46:32.794847 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1004 
04:46:32.794919 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1004 04:46:32.794980 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1004 04:46:32.795047 6258 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1004 04:46:32.795128 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1004 04:46:32.795191 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1004 04:46:32.795028 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1004 04:46:32.795168 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI1004 04:46:32.795280 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI1004 04:46:32.795292 6258 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1004 04:46:32.795351 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1004 04:46:32.795380 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1004 04:46:32.795428 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1004 04:46:32.795460 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1004 04:46:32.795510 6258 factory.go:656] Stopping watch factory\\\\nI1004 04:46:32.795537 6258 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 
04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod 
openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee761030468
8db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.164364 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.177546 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.191372 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 
04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.202900 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.214928 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.234560 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d8
8bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.246421 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.246481 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.246496 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.246518 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.246534 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.247274 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.261524 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.273230 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc 
kubenswrapper[4802]: I1004 04:46:51.285347 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.296389 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.307911 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.320545 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.349847 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 
04:46:51.349895 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.349912 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.349932 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.349943 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.359176 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:51 crc kubenswrapper[4802]: E1004 04:46:51.359327 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.452960 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.453013 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.453028 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.453048 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.453060 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.556368 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.556416 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.556425 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.556444 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.556455 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.659484 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.659553 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.659570 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.659601 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.659625 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.764895 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.764954 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.764973 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.765000 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.765018 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.786597 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/2.log" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.791632 4802 scope.go:117] "RemoveContainer" containerID="6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39" Oct 04 04:46:51 crc kubenswrapper[4802]: E1004 04:46:51.792037 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.825774 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.844421 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.859754 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.867433 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.867480 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.867492 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.867509 4802 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.867521 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.876119 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.892891 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.908775 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.929348 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8e
bd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.943416 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc 
kubenswrapper[4802]: I1004 04:46:51.961862 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.970554 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.970590 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.970600 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.970614 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.970624 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:51Z","lastTransitionTime":"2025-10-04T04:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.982393 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:51 crc kubenswrapper[4802]: I1004 04:46:51.998398 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:51Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.013981 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:52Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.032238 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:52Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.047833 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:52Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.061886 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:52Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.073999 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.074041 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.074055 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc 
kubenswrapper[4802]: I1004 04:46:52.074074 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.074088 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.076937 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:52Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.094499 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:52Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.109103 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:52Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.176047 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.176086 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.176095 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.176109 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.176117 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.279710 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.279781 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.279799 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.279828 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.279844 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.358836 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.358923 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:52 crc kubenswrapper[4802]: E1004 04:46:52.359051 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.359105 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:52 crc kubenswrapper[4802]: E1004 04:46:52.359297 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:52 crc kubenswrapper[4802]: E1004 04:46:52.359401 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.382970 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.383086 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.383112 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.383144 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.383165 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.485625 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.485718 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.485740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.485763 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.485783 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.588921 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.589002 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.589023 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.589050 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.589069 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.692665 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.692732 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.692748 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.692774 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.692790 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.794855 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.794912 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.794927 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.794950 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.794962 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.898223 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.898277 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.898286 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.898305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:52 crc kubenswrapper[4802]: I1004 04:46:52.898319 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:52Z","lastTransitionTime":"2025-10-04T04:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.001346 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.001406 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.001420 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.001441 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.001455 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.105056 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.105592 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.105844 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.106048 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.106244 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.210116 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.210170 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.210182 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.210206 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.210232 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.313478 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.313544 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.313566 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.313594 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.313613 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.359355 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:53 crc kubenswrapper[4802]: E1004 04:46:53.359604 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.417266 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.417347 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.417364 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.417396 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.417417 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.520582 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.520653 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.520671 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.520698 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.520718 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.623933 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.623979 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.623990 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.624008 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.624019 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.732740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.733778 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.733947 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.734072 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.734169 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.837883 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.838325 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.838447 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.838629 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.838762 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.942164 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.942217 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.942228 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.942246 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:53 crc kubenswrapper[4802]: I1004 04:46:53.942257 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:53Z","lastTransitionTime":"2025-10-04T04:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.045539 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.045605 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.045620 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.045666 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.045686 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.148820 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.149089 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.149152 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.149217 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.149272 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.252604 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.252994 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.253077 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.253163 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.253224 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.355827 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.355870 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.355881 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.355899 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.355909 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.359284 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.359326 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:54 crc kubenswrapper[4802]: E1004 04:46:54.359396 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.359322 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:54 crc kubenswrapper[4802]: E1004 04:46:54.359519 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:54 crc kubenswrapper[4802]: E1004 04:46:54.359584 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.458798 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.458861 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.458882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.458912 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.458935 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.562120 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.562212 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.562224 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.562246 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.562262 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.665039 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.665112 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.665131 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.665159 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.665180 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.769181 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.769264 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.769295 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.769327 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.769349 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.872527 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.872591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.872611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.872700 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.872721 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.977041 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.977129 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.977143 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.977169 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:54 crc kubenswrapper[4802]: I1004 04:46:54.977183 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:54Z","lastTransitionTime":"2025-10-04T04:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.080569 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.080635 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.080678 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.080706 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.080725 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.184201 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.184380 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.184448 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.184512 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.184532 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.288262 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.288370 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.288386 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.288409 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.288422 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.359505 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:55 crc kubenswrapper[4802]: E1004 04:46:55.359743 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.392261 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.392316 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.392329 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.392357 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.392386 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.495719 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.495768 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.495787 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.495812 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.495827 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.598608 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.598693 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.598710 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.598738 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.598758 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.702737 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.702791 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.702802 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.702821 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.702833 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.805232 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.805300 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.805318 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.805342 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.805365 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.908495 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.908581 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.908602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.908631 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:55 crc kubenswrapper[4802]: I1004 04:46:55.908688 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:55Z","lastTransitionTime":"2025-10-04T04:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.011063 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.011126 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.011134 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.011153 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.011166 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.113987 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.114045 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.114054 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.114073 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.114086 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.216552 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.216611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.216620 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.216635 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.216665 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.319576 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.319653 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.319665 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.319682 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.319693 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.359741 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.359748 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:56 crc kubenswrapper[4802]: E1004 04:46:56.359958 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.359748 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:56 crc kubenswrapper[4802]: E1004 04:46:56.360075 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:56 crc kubenswrapper[4802]: E1004 04:46:56.360180 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.422976 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.423029 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.423041 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.423059 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.423069 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.526315 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.526362 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.526371 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.526387 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.526398 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.629840 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.629877 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.629886 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.629903 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.629914 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.733383 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.733461 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.733479 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.733512 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.733533 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.836270 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.836328 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.836343 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.836361 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.836374 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.939316 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.939388 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.939405 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.939433 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:56 crc kubenswrapper[4802]: I1004 04:46:56.939455 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:56Z","lastTransitionTime":"2025-10-04T04:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.042796 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.042893 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.042921 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.042950 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.042970 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.146000 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.146064 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.146076 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.146096 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.146109 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.249748 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.249822 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.249851 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.249885 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.249909 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.353832 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.353896 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.353915 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.353942 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.353962 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.359074 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:57 crc kubenswrapper[4802]: E1004 04:46:57.359228 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.456939 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.456991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.457003 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.457022 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.457036 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.559917 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.559978 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.559988 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.560006 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.560019 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.662532 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.662572 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.662581 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.662598 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.662608 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.765942 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.766030 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.766051 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.766081 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.766102 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.869521 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.869574 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.869591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.869615 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.869634 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.972627 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.972680 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.972691 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.972709 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:57 crc kubenswrapper[4802]: I1004 04:46:57.972720 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:57Z","lastTransitionTime":"2025-10-04T04:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.114100 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.114143 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.114154 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.114175 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.114186 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.217701 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.217831 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.217856 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.217921 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.217935 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.321447 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.321507 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.321518 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.321542 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.321557 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.358878 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.358902 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:46:58 crc kubenswrapper[4802]: E1004 04:46:58.359153 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.358913 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:46:58 crc kubenswrapper[4802]: E1004 04:46:58.359233 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:46:58 crc kubenswrapper[4802]: E1004 04:46:58.359316 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.376005 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.395588 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.411386 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.423340 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.424496 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.424583 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.424623 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc 
kubenswrapper[4802]: I1004 04:46:58.424685 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.424710 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.439220 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.459234 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.474660 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.495703 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.509671 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.523226 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.528034 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.528084 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.528096 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 
04:46:58.528116 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.528129 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.536114 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.567744 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd5
3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.583096 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.594235 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.608768 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc 
kubenswrapper[4802]: I1004 04:46:58.612686 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.630339 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.630404 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.630429 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.630460 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.630483 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.636425 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.652082 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.668514 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.683286 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.703570 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.714970 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc 
kubenswrapper[4802]: I1004 04:46:58.730756 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.733975 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.734022 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.734032 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.734050 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.734059 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.744887 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.760385 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.774117 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.785247 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.798775 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.809625 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.822423 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.837388 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc 
kubenswrapper[4802]: I1004 04:46:58.837466 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.837489 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.837523 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.837550 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.842834 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.855043 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.868487 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.884189 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c674
4d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.896033 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.909166 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.930215 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d8
8bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:58Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.940240 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.940289 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.940301 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.940323 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:58 crc kubenswrapper[4802]: I1004 04:46:58.940337 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:58Z","lastTransitionTime":"2025-10-04T04:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.043530 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.043574 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.043586 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.043606 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.043619 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.147879 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.147953 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.147973 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.148002 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.148026 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.251621 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.251701 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.251711 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.251730 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.251745 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.354816 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.354899 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.354924 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.354959 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.354982 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.359472 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:46:59 crc kubenswrapper[4802]: E1004 04:46:59.359752 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.457297 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.457360 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.457375 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.457399 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.457414 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.560900 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.560977 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.560995 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.561020 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.561039 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.664610 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.664681 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.664692 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.664714 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.664728 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.699631 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.699707 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.699719 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.699740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.699763 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: E1004 04:46:59.714015 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:59Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.718421 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.718455 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.718465 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.718481 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.718492 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: E1004 04:46:59.736687 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:59Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.741849 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.741924 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.741935 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.741954 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.741964 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: E1004 04:46:59.759955 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:59Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.766367 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.766441 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.766458 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.766485 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.766534 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: E1004 04:46:59.781545 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:59Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.786346 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.786380 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.786388 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.786405 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.786417 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: E1004 04:46:59.804624 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:59Z is after 2025-08-24T17:21:41Z" Oct 04 04:46:59 crc kubenswrapper[4802]: E1004 04:46:59.804800 4802 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.807960 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.808040 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.808060 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.808090 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.808110 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.911301 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.911356 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.911366 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.911382 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:46:59 crc kubenswrapper[4802]: I1004 04:46:59.911393 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:46:59Z","lastTransitionTime":"2025-10-04T04:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.015548 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.015621 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.015633 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.015682 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.015701 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.119624 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.119736 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.119754 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.119777 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.119796 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.223388 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.223441 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.223454 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.223476 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.223489 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.326552 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.326602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.326615 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.326758 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.326778 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.359527 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.359553 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.359599 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:00 crc kubenswrapper[4802]: E1004 04:47:00.359768 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:00 crc kubenswrapper[4802]: E1004 04:47:00.359914 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:00 crc kubenswrapper[4802]: E1004 04:47:00.360112 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.429878 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.429934 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.429946 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.429970 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.429984 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.532797 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.532854 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.532871 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.532902 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.532919 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.638381 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.638455 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.638467 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.638490 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.638510 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.741949 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.742006 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.742023 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.742042 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.742053 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.843991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.844035 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.844045 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.844061 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.844077 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.947045 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.947094 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.947106 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.947123 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:00 crc kubenswrapper[4802]: I1004 04:47:00.947136 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:00Z","lastTransitionTime":"2025-10-04T04:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.049416 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.049454 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.049468 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.049487 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.049498 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.152728 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.152771 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.152784 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.152801 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.152812 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.256201 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.256271 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.256290 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.256314 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.256333 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.359029 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.359495 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.359532 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.359542 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.359559 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.359571 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: E1004 04:47:01.360089 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.463485 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.463544 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.463560 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.463578 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.463589 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.566321 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.566362 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.566373 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.566389 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.566400 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.669599 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.669671 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.669681 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.669698 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.669709 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.773150 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.773204 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.773215 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.773234 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.773246 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.876555 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.876618 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.876633 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.876674 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.876687 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.980779 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.980819 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.980829 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.980847 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:01 crc kubenswrapper[4802]: I1004 04:47:01.980860 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:01Z","lastTransitionTime":"2025-10-04T04:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.083974 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.084019 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.084028 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.084043 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.084055 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.186951 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.187382 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.187463 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.187541 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.187607 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.290435 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.291174 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.291441 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.292129 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.292337 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.358970 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.358974 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:02 crc kubenswrapper[4802]: E1004 04:47:02.359479 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:02 crc kubenswrapper[4802]: E1004 04:47:02.359479 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.358994 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:02 crc kubenswrapper[4802]: E1004 04:47:02.359574 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.395540 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.395587 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.395599 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.395624 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.395655 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.499239 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.499282 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.499294 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.499312 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.499327 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.602515 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.602567 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.602580 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.602600 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.602612 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.705348 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.705946 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.706054 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.706162 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.706242 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.809245 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.809278 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.809289 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.809307 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.809318 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.912893 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.913002 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.913025 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.913062 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:02 crc kubenswrapper[4802]: I1004 04:47:02.913084 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:02Z","lastTransitionTime":"2025-10-04T04:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.015979 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.016457 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.016713 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.017349 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.017616 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.120941 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.121018 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.121038 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.121068 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.121088 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.224998 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.225060 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.225077 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.225102 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.225117 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.328235 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.328279 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.328352 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.328416 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.328434 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.358842 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:03 crc kubenswrapper[4802]: E1004 04:47:03.359364 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.359718 4802 scope.go:117] "RemoveContainer" containerID="6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39" Oct 04 04:47:03 crc kubenswrapper[4802]: E1004 04:47:03.359917 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.431440 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.431855 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.431949 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.432033 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.432128 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.534822 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.534886 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.534900 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.534919 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.534932 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.638202 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.638243 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.638254 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.638271 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.638283 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.741359 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.741423 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.741437 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.741459 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.741474 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.843434 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.843808 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.843878 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.843945 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.844001 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.946541 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.946890 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.946971 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.947063 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:03 crc kubenswrapper[4802]: I1004 04:47:03.947158 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:03Z","lastTransitionTime":"2025-10-04T04:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.050824 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.050894 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.050917 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.050947 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.050978 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.154364 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.154446 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.154461 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.154484 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.154498 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.257974 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.258028 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.258038 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.258059 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.258076 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.359436 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:04 crc kubenswrapper[4802]: E1004 04:47:04.359598 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.359827 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:04 crc kubenswrapper[4802]: E1004 04:47:04.360434 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.362134 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:04 crc kubenswrapper[4802]: E1004 04:47:04.362553 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.366077 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.366228 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.366268 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.366305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.366344 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.470833 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.470885 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.470903 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.470930 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.470948 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.574428 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.574462 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.574472 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.574488 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.574499 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.677864 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.677918 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.677929 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.677950 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.677964 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.780131 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.780166 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.780175 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.780189 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.780199 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.883334 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.883387 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.883400 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.883422 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.883437 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.918274 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:04 crc kubenswrapper[4802]: E1004 04:47:04.918497 4802 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:47:04 crc kubenswrapper[4802]: E1004 04:47:04.918634 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs podName:0d189ff1-3446-47fe-bcea-6b09e72a4567 nodeName:}" failed. No retries permitted until 2025-10-04 04:47:36.918603804 +0000 UTC m=+99.326604559 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs") pod "network-metrics-daemon-n27xq" (UID: "0d189ff1-3446-47fe-bcea-6b09e72a4567") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.986039 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.986095 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.986105 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.986121 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:04 crc kubenswrapper[4802]: I1004 04:47:04.986131 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:04Z","lastTransitionTime":"2025-10-04T04:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.089087 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.089155 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.089166 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.089185 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.089196 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.192589 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.192661 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.192673 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.192693 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.192704 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.295710 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.295780 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.295799 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.295830 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.295851 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.359535 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:05 crc kubenswrapper[4802]: E1004 04:47:05.359721 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.398249 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.398294 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.398312 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.398330 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.398340 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.501854 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.501903 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.501915 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.501933 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.501945 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.605025 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.605080 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.605089 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.605107 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.605120 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.709024 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.709094 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.709109 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.709134 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.709152 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.812410 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.812451 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.812464 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.812501 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.812513 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.915598 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.915667 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.915685 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.915706 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:05 crc kubenswrapper[4802]: I1004 04:47:05.915722 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:05Z","lastTransitionTime":"2025-10-04T04:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.018534 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.018574 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.018602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.018622 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.018678 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.123036 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.123103 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.123120 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.123144 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.123161 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.225851 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.225906 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.225921 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.225945 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.225959 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.328562 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.328603 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.328612 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.328629 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.328658 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.359719 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.359737 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.359748 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:06 crc kubenswrapper[4802]: E1004 04:47:06.359869 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:06 crc kubenswrapper[4802]: E1004 04:47:06.360044 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:06 crc kubenswrapper[4802]: E1004 04:47:06.360132 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.431713 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.431750 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.431761 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.431780 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.431794 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.534305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.534345 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.534356 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.534376 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.534394 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.637274 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.637330 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.637343 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.637363 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.637377 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.745848 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.745972 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.745999 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.746031 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.746056 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.849549 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.849587 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.849595 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.849612 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.849624 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.952794 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.952843 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.952854 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.952874 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:06 crc kubenswrapper[4802]: I1004 04:47:06.952886 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:06Z","lastTransitionTime":"2025-10-04T04:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.055546 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.055598 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.055607 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.055627 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.055661 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.158422 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.158478 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.158491 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.158516 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.158531 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.263227 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.263285 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.263299 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.263325 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.263337 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.359667 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:07 crc kubenswrapper[4802]: E1004 04:47:07.359872 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.366587 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.366677 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.366695 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.366716 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.366731 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.469881 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.469930 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.469941 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.469961 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.469975 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.572744 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.572786 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.572799 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.572858 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.572874 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.693940 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.694003 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.694075 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.694095 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.694106 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.796926 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.796995 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.797007 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.797029 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.797043 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.899579 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.899631 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.899666 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.899685 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:07 crc kubenswrapper[4802]: I1004 04:47:07.899699 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:07Z","lastTransitionTime":"2025-10-04T04:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.002378 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.002430 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.002441 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.002459 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.002471 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.105368 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.105438 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.105450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.105490 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.105503 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.209727 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.209796 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.209814 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.209841 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.209861 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.313798 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.313894 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.313913 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.313972 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.313993 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.358927 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.359060 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.359070 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:08 crc kubenswrapper[4802]: E1004 04:47:08.359256 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:08 crc kubenswrapper[4802]: E1004 04:47:08.359319 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:08 crc kubenswrapper[4802]: E1004 04:47:08.359163 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.382370 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.403131 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.416843 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.416911 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.416925 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.416947 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.416961 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.421364 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.434697 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.447465 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.460180 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.475493 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.493307 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.509101 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.522120 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc 
kubenswrapper[4802]: I1004 04:47:08.522274 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.522313 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.522450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.523265 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.532435 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.552876 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.569179 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.581856 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.595380 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.610710 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.623723 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.625955 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.626003 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.626019 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.626042 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.626058 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.647557 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p
2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-b
incopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.662426 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:08Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:08 crc 
kubenswrapper[4802]: I1004 04:47:08.729384 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.729433 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.729449 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.729472 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.729492 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.832808 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.832869 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.832882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.832904 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.832919 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.936347 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.936422 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.936446 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.936473 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:08 crc kubenswrapper[4802]: I1004 04:47:08.936493 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:08Z","lastTransitionTime":"2025-10-04T04:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.039628 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.039703 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.039717 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.039740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.039758 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.142754 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.142825 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.142843 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.142871 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.142889 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.246557 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.246705 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.246739 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.246771 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.246794 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.350490 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.350776 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.350794 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.350825 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.350846 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.359117 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:09 crc kubenswrapper[4802]: E1004 04:47:09.359233 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.454300 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.454366 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.454388 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.454413 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.454430 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.557496 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.557555 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.557565 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.557584 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.557597 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.661314 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.661378 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.661392 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.661415 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.661436 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.764590 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.764682 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.764702 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.764720 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.764733 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.863286 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/0.log" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.863382 4802 generic.go:334] "Generic (PLEG): container finished" podID="c1c56664-b32b-475a-89eb-55910da58338" containerID="8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c" exitCode=1 Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.863439 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jpj5" event={"ID":"c1c56664-b32b-475a-89eb-55910da58338","Type":"ContainerDied","Data":"8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.864179 4802 scope.go:117] "RemoveContainer" containerID="8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.867342 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.867383 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.867398 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.867418 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.867435 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.887130 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:09Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.901062 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:09Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.914212 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:09Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.932326 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:09Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.946009 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:09Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.962485 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:09Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.969304 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.969328 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.969338 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc 
kubenswrapper[4802]: I1004 04:47:09.969353 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.969366 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.973771 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.973807 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.973820 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.973838 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.973850 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.977557 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"2025-10-04T04:46:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3\\\\n2025-10-04T04:46:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3 to /host/opt/cni/bin/\\\\n2025-10-04T04:46:24Z [verbose] multus-daemon started\\\\n2025-10-04T04:46:24Z [verbose] Readiness Indicator file check\\\\n2025-10-04T04:47:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:09Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:09 crc kubenswrapper[4802]: E1004 04:47:09.986986 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:09Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.990599 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.990677 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.990693 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.990715 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:09 crc kubenswrapper[4802]: I1004 04:47:09.990744 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:09Z","lastTransitionTime":"2025-10-04T04:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: E1004 04:47:10.003384 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.008668 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.010220 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.010276 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.010294 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.010316 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.010330 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.026839 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6
202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: E1004 04:47:10.037807 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.042804 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.042907 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.042928 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.042954 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.043012 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.051036 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: E1004 04:47:10.056784 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.060704 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.060756 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.060769 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.060788 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.060799 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.064383 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: E1004 04:47:10.071710 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: E1004 04:47:10.071859 4802 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.073684 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.073719 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.073729 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.073746 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.073759 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.075890 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.087437 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.100489 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.113113 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc 
kubenswrapper[4802]: I1004 04:47:10.123095 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.135594 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.147441 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.176217 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 
04:47:10.176272 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.176284 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.176300 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.176310 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.279465 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.279529 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.279545 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.279573 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.279917 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.359072 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.359115 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.359144 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:10 crc kubenswrapper[4802]: E1004 04:47:10.359279 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:10 crc kubenswrapper[4802]: E1004 04:47:10.359424 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:10 crc kubenswrapper[4802]: E1004 04:47:10.359691 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.383134 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.383174 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.383185 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.383204 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.383218 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.492363 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.492441 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.492453 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.492471 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.492484 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.595532 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.595572 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.595588 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.595607 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.595619 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.699162 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.699204 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.699213 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.699231 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.699242 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.802269 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.802312 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.802322 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.802337 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.802348 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.870004 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/0.log" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.870077 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jpj5" event={"ID":"c1c56664-b32b-475a-89eb-55910da58338","Type":"ContainerStarted","Data":"703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.888857 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.904092 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.905384 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.905417 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.905426 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.905442 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.905456 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:10Z","lastTransitionTime":"2025-10-04T04:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.919420 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.941201 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.955729 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.972148 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8e
bd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:10 crc kubenswrapper[4802]: I1004 04:47:10.986033 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:10Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc 
kubenswrapper[4802]: I1004 04:47:11.022861 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.022907 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.022918 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.022939 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.022952 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.025521 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.043108 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.063786 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.082393 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.095388 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.108929 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.122382 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.125091 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.125163 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.125173 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc 
kubenswrapper[4802]: I1004 04:47:11.125194 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.125204 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.137808 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"2025-10-04T04:46:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3\\\\n2025-10-04T04:46:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3 to /host/opt/cni/bin/\\\\n2025-10-04T04:46:24Z [verbose] multus-daemon started\\\\n2025-10-04T04:46:24Z [verbose] Readiness Indicator file check\\\\n2025-10-04T04:47:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.158101 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.170153 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.184019 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:11Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.228388 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.228459 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.228471 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.228495 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.228526 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.331614 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.331698 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.331716 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.331734 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.331749 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.359184 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:11 crc kubenswrapper[4802]: E1004 04:47:11.359311 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.433747 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.433795 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.433810 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.433832 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.433846 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.536203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.536234 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.536242 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.536257 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.536269 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.639634 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.639712 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.639727 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.639750 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.639765 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.743482 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.743565 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.743581 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.743604 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.743620 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.846751 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.846833 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.846852 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.846887 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.846908 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.950537 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.950591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.950601 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.950621 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:11 crc kubenswrapper[4802]: I1004 04:47:11.950636 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:11Z","lastTransitionTime":"2025-10-04T04:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.058709 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.058766 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.058777 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.058796 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.058808 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.161672 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.161707 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.161718 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.161736 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.161748 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.265952 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.266024 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.266049 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.266079 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.266120 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.359408 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:12 crc kubenswrapper[4802]: E1004 04:47:12.359604 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.359436 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:12 crc kubenswrapper[4802]: E1004 04:47:12.359754 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.359411 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:12 crc kubenswrapper[4802]: E1004 04:47:12.359856 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.368469 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.368521 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.368536 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.368552 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.368564 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.471406 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.471450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.471465 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.471486 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.471500 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.574119 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.574164 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.574173 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.574189 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.574201 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.677041 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.677121 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.677144 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.677177 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.677202 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.781335 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.781376 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.781390 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.781433 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.781449 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.885004 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.885085 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.885113 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.885148 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.885178 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.988869 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.988932 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.988953 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.988981 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:12 crc kubenswrapper[4802]: I1004 04:47:12.988998 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:12Z","lastTransitionTime":"2025-10-04T04:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.091246 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.091299 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.091310 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.091331 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.091342 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.195120 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.195172 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.195181 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.195198 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.195211 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.298415 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.298447 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.298460 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.298496 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.298511 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.358948 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:13 crc kubenswrapper[4802]: E1004 04:47:13.359128 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.401562 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.401609 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.401618 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.401635 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.401659 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.504775 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.504837 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.504850 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.504874 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.504889 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.607880 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.608252 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.608351 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.608456 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.608566 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.712347 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.712474 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.712491 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.712518 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.712543 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.815221 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.815269 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.815284 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.815302 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.815316 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.918825 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.918869 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.918884 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.918906 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:13 crc kubenswrapper[4802]: I1004 04:47:13.918920 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:13Z","lastTransitionTime":"2025-10-04T04:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.021272 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.021776 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.021982 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.022202 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.022398 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.124537 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.124579 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.124589 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.124606 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.124618 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.227090 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.227151 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.227161 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.227180 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.227193 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.336284 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.336366 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.336381 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.336401 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.336415 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.358894 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.358948 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:14 crc kubenswrapper[4802]: E1004 04:47:14.359248 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 04 04:47:14 crc kubenswrapper[4802]: E1004 04:47:14.359290 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.358968 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq"
Oct 04 04:47:14 crc kubenswrapper[4802]: E1004 04:47:14.359663 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.439816 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.439869 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.439882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.439903 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.439920 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.543602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.543678 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.543694 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.543718 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.543731 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.647251 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.647324 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.647342 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.647369 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.647392 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.750683 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.750737 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.750750 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.750769 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.750784 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.854444 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.854504 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.854518 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.854541 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.854564 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.958471 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.958519 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.958530 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.958548 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:14 crc kubenswrapper[4802]: I1004 04:47:14.958560 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:14Z","lastTransitionTime":"2025-10-04T04:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.062426 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.062478 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.062511 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.062758 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.062772 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.165599 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.165678 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.165695 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.165715 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.165732 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.269305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.269361 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.269372 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.269427 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.269442 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.359018 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 04 04:47:15 crc kubenswrapper[4802]: E1004 04:47:15.359238 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.372853 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.372962 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.372992 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.373018 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.373034 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.476110 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.476166 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.476178 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.476197 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.476207 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.578818 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.578869 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.578882 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.578899 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.578912 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.683183 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.683249 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.683262 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.683285 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.683301 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.786355 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.786406 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.786419 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.786440 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.786451 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.890081 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.890139 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.890152 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.890172 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.890186 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.994049 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.994092 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.994103 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.994120 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:15 crc kubenswrapper[4802]: I1004 04:47:15.994133 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:15Z","lastTransitionTime":"2025-10-04T04:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.096966 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.097312 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.097422 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.097525 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.097614 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.200890 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.200940 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.200949 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.200969 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.200979 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.303941 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.303986 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.303999 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.304016 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.304027 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.359385 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.359549 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.359386 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 04 04:47:16 crc kubenswrapper[4802]: E1004 04:47:16.359714 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 04 04:47:16 crc kubenswrapper[4802]: E1004 04:47:16.359868 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 04 04:47:16 crc kubenswrapper[4802]: E1004 04:47:16.360008 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.406771 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.406803 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.406812 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.406835 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.406856 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.509319 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.509365 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.509376 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.509390 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.509402 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.617792 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.618047 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.618060 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.618078 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.618090 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.721458 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.721497 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.721506 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.721522 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.721531 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.824203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.824249 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.824258 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.824275 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.824285 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.926980 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.927024 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.927035 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.927053 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:16 crc kubenswrapper[4802]: I1004 04:47:16.927064 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:16Z","lastTransitionTime":"2025-10-04T04:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.029767 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.029808 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.029817 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.029833 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.029845 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.133485 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.133516 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.133525 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.133539 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.133551 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.237867 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.237908 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.237917 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.237934 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.237945 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.341484 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.341869 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.341957 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.342076 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.342157 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.358857 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:17 crc kubenswrapper[4802]: E1004 04:47:17.359094 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.444765 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.444811 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.444821 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.444841 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.444854 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.547706 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.547748 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.547759 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.547781 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.547793 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.651471 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.652099 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.652202 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.652322 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.652424 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.755574 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.755616 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.755626 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.755653 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.755664 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.857979 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.858046 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.858057 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.858080 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.858094 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.961525 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.961566 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.961583 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.961608 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:17 crc kubenswrapper[4802]: I1004 04:47:17.961624 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:17Z","lastTransitionTime":"2025-10-04T04:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.064904 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.064960 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.064971 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.064986 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.064997 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.167972 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.168022 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.168031 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.168054 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.168064 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.271207 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.271242 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.271251 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.271266 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.271277 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.359428 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.359486 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.359537 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:18 crc kubenswrapper[4802]: E1004 04:47:18.359593 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:18 crc kubenswrapper[4802]: E1004 04:47:18.359711 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:18 crc kubenswrapper[4802]: E1004 04:47:18.360137 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.360357 4802 scope.go:117] "RemoveContainer" containerID="6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.374238 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.374450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.374584 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.374681 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.374753 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.379022 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.396233 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.420438 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.437497 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"2025-10-04T04:46:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3\\\\n2025-10-04T04:46:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3 to /host/opt/cni/bin/\\\\n2025-10-04T04:46:24Z [verbose] multus-daemon started\\\\n2025-10-04T04:46:24Z [verbose] 
Readiness Indicator file check\\\\n2025-10-04T04:47:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.460515 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.472861 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.477697 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.477748 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.477760 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.477778 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.477793 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.488796 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb
43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.502576 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.516093 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.531480 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.553935 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.572081 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.581423 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.581484 4802 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.581496 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.581515 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.581528 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.585231 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.598622 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.612603 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.624707 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.639326 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8e
bd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.653779 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc 
kubenswrapper[4802]: I1004 04:47:18.684194 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.684236 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.684247 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.684263 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.684273 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.786492 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.786522 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.786531 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.786546 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.786556 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.889892 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.889932 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.889943 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.889960 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.889972 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.904307 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/2.log" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.906706 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.908013 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.924005 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.943386 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.959605 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.975434 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.993375 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.993416 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.993425 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.993445 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.993457 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:18Z","lastTransitionTime":"2025-10-04T04:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:18 crc kubenswrapper[4802]: I1004 04:47:18.997889 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:18Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.020874 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.037737 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.052407 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"2025-10-04T04:46:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3\\\\n2025-10-04T04:46:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3 to /host/opt/cni/bin/\\\\n2025-10-04T04:46:24Z [verbose] multus-daemon started\\\\n2025-10-04T04:46:24Z [verbose] 
Readiness Indicator file check\\\\n2025-10-04T04:47:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.075990 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod 
openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mo
untPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.088759 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.096193 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.096237 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.096245 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.096263 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.096275 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.108915 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.124253 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c674
4d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.139397 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.158282 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.176921 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.194417 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.199692 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.199762 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.199776 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.199797 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.199809 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.214382 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p
2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-b
incopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.229168 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:19 crc 
kubenswrapper[4802]: I1004 04:47:19.302602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.302668 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.302680 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.302697 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.302708 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.358864 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:19 crc kubenswrapper[4802]: E1004 04:47:19.359060 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.405545 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.405601 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.405611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.405630 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.405658 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.508879 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.508927 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.508941 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.508961 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.508972 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.612181 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.612223 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.612236 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.612253 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.612265 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.715081 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.715129 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.715138 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.715154 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.715165 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.818132 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.818209 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.818229 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.818254 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.818272 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.921002 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.921063 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.921083 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.921107 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:19 crc kubenswrapper[4802]: I1004 04:47:19.921126 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:19Z","lastTransitionTime":"2025-10-04T04:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.023775 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.023844 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.023865 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.023890 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.023908 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.126764 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.126831 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.126849 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.126875 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.126893 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.158726 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.159135 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.159156 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.159178 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.159197 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.175905 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.181052 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.181111 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.181121 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.181141 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.181154 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.195627 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.200527 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.200591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.200607 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.200630 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.200669 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.216514 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.222393 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.222469 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.222485 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.222508 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.222527 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.237425 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.241673 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.241728 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.241738 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.241754 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.241768 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.256516 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.256630 4802 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.258496 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.258555 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.258571 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.258595 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.258610 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.359217 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.359589 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.359559 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.359735 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.359821 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.360172 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.362491 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.362554 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.362570 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.362604 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.362617 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.467253 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.467294 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.467302 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.467319 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.467331 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.572072 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.572137 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.572154 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.572175 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.572186 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.676251 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.676337 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.676355 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.676385 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.676411 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.779783 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.779865 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.779884 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.779910 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.779930 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.883578 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.883629 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.883665 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.883690 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.883703 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.916806 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/3.log" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.917720 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/2.log" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.921068 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c" exitCode=1 Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.921114 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.921163 4802 scope.go:117] "RemoveContainer" containerID="6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.922574 4802 scope.go:117] "RemoveContainer" containerID="29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c" Oct 04 04:47:20 crc kubenswrapper[4802]: E1004 04:47:20.922928 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.942501 4802 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.955155 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.974958 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd60
50c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae3
3ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:2
9Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.987461 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.987527 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.987545 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.987571 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.987590 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:20Z","lastTransitionTime":"2025-10-04T04:47:20Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:20 crc kubenswrapper[4802]: I1004 04:47:20.989973 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:20Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc 
kubenswrapper[4802]: I1004 04:47:21.006368 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.021310 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.036833 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.048202 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.071241 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.090810 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.091228 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.091281 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.091296 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.091323 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.091340 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.107373 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.120697 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.136868 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"2025-10-04T04:46:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3\\\\n2025-10-04T04:46:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3 to /host/opt/cni/bin/\\\\n2025-10-04T04:46:24Z [verbose] multus-daemon started\\\\n2025-10-04T04:46:24Z [verbose] 
Readiness Indicator file check\\\\n2025-10-04T04:47:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.158954 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"ork_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF1004 04:47:19.379071 6857 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default 
network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:47:19.379086 6857 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1004 
04:47:19.3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45
dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.191279 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.193396 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.193449 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.193463 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.193483 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.193497 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.212955 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.226403 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.240142 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:21Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.296463 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.296517 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.296530 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.296553 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.296566 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.358867 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:21 crc kubenswrapper[4802]: E1004 04:47:21.359092 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.399774 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.399832 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.399845 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.399867 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.399883 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.502481 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.502581 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.502598 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.502625 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.502662 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.606211 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.606268 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.606284 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.606325 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.606343 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.709465 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.709537 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.709557 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.709584 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.709604 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.813062 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.813116 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.813129 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.813145 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.813155 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.915488 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.915535 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.915544 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.915561 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:21 crc kubenswrapper[4802]: I1004 04:47:21.915576 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:21Z","lastTransitionTime":"2025-10-04T04:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.024617 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.024927 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.024975 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.025010 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.025026 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.128779 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.128871 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.128883 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.128904 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.128920 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.223371 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.223583 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:26.223546983 +0000 UTC m=+148.631547638 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.233068 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.233253 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.233276 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.233301 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.233322 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.325387 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.325481 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.325527 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.325577 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.325622 4802 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.325820 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.325791075 +0000 UTC m=+148.733791700 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.325931 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.325989 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.325988 4802 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.326113 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.326085603 +0000 UTC m=+148.734086228 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.326022 4802 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.326219 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.326187706 +0000 UTC m=+148.734188371 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.326311 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.326345 4802 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.326369 4802 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.326431 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.326415742 +0000 UTC m=+148.734416407 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.336299 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.336351 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.336362 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.336383 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.336394 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.359288 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.359456 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.359751 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.359892 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.360109 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:22 crc kubenswrapper[4802]: E1004 04:47:22.360240 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.439317 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.439380 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.439399 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.439427 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.439446 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.543187 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.543251 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.543261 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.543282 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.543292 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.645963 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.646020 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.646029 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.646049 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.646063 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.749011 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.749074 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.749096 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.749125 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.749147 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.852538 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.852601 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.852615 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.852636 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.852674 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.930333 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/3.log" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.955240 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.955298 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.955310 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.955330 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:22 crc kubenswrapper[4802]: I1004 04:47:22.955343 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:22Z","lastTransitionTime":"2025-10-04T04:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.058734 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.058794 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.058805 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.058822 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.058839 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.162332 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.162390 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.162401 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.162422 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.162435 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.265435 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.265473 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.265482 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.265499 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.265509 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.358898 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:23 crc kubenswrapper[4802]: E1004 04:47:23.359145 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.368094 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.368156 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.368210 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.368233 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.368250 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.471283 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.471883 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.471993 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.472059 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.472170 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.575619 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.575734 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.575756 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.575790 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.575814 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.678112 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.678157 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.678167 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.678185 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.678198 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.780298 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.780352 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.780365 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.780389 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.780406 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.883090 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.883134 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.883143 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.883182 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.883195 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.985561 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.985611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.985621 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.985657 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:23 crc kubenswrapper[4802]: I1004 04:47:23.985691 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:23Z","lastTransitionTime":"2025-10-04T04:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.089067 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.089122 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.089131 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.089152 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.089169 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.192053 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.192092 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.192102 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.192120 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.192130 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.295146 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.295201 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.295211 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.295229 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.295247 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.359609 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.359737 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:24 crc kubenswrapper[4802]: E1004 04:47:24.359804 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:24 crc kubenswrapper[4802]: E1004 04:47:24.359971 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.359994 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:24 crc kubenswrapper[4802]: E1004 04:47:24.360265 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.398666 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.398938 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.399006 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.399089 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.399206 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.501945 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.501999 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.502016 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.502042 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.502061 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.605352 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.605463 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.605804 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.605847 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.605863 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.708286 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.708336 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.708347 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.708366 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.708379 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.811439 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.811522 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.811538 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.811563 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.811576 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.914424 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.914820 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.914920 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.915028 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:24 crc kubenswrapper[4802]: I1004 04:47:24.915101 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:24Z","lastTransitionTime":"2025-10-04T04:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.018215 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.018279 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.018291 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.018316 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.018330 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.120437 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.120474 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.120482 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.120517 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.120527 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.223282 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.223325 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.223340 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.223356 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.223366 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.326675 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.326720 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.326729 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.326747 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.326758 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.359408 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:25 crc kubenswrapper[4802]: E1004 04:47:25.359694 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.430089 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.430155 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.430172 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.430196 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.430215 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.533920 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.534003 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.534023 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.534056 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.534076 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.636876 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.636945 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.636966 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.636991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.637009 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.739540 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.739584 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.739594 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.739616 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.739654 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.842623 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.842950 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.843055 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.843135 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.843198 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.946208 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.946289 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.946302 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.946325 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:25 crc kubenswrapper[4802]: I1004 04:47:25.946343 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:25Z","lastTransitionTime":"2025-10-04T04:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.049484 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.049553 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.049573 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.049602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.049621 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.152887 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.152971 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.152989 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.153017 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.153034 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.255785 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.255839 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.255850 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.255868 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.255886 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.359123 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.359183 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:26 crc kubenswrapper[4802]: E1004 04:47:26.359259 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:26 crc kubenswrapper[4802]: E1004 04:47:26.359443 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.359481 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.359503 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.359513 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.359528 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.359541 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.359717 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:26 crc kubenswrapper[4802]: E1004 04:47:26.359807 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.461657 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.461694 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.461703 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.461746 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.461763 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.564605 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.564663 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.564673 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.564690 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.564700 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.667845 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.668104 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.668186 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.668259 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.668328 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.771341 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.771399 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.771418 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.771450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.771471 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.874770 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.874841 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.874859 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.875256 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.875303 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.978560 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.978909 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.978918 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.978935 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:26 crc kubenswrapper[4802]: I1004 04:47:26.978947 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:26Z","lastTransitionTime":"2025-10-04T04:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.081686 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.081724 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.081732 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.081747 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.081758 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.185197 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.185245 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.185257 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.185278 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.185291 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.288393 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.288434 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.288445 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.288463 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.288473 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.359221 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:27 crc kubenswrapper[4802]: E1004 04:47:27.359393 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.392332 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.392594 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.392712 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.392842 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.392923 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.496127 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.496200 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.496212 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.496232 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.496243 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.599534 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.599611 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.599627 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.599671 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.599687 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.702674 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.702710 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.702722 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.702740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.702752 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.806151 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.806212 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.806230 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.806258 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.806277 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.909591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.909659 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.909669 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.909687 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:27 crc kubenswrapper[4802]: I1004 04:47:27.909698 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:27Z","lastTransitionTime":"2025-10-04T04:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.013700 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.013758 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.013773 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.013790 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.013802 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.117141 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.117209 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.117230 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.117256 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.117274 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.220066 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.220116 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.220126 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.220145 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.220161 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.323566 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.323623 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.323656 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.323680 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.323698 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.359290 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.359347 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.359290 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:28 crc kubenswrapper[4802]: E1004 04:47:28.359539 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:28 crc kubenswrapper[4802]: E1004 04:47:28.359692 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:28 crc kubenswrapper[4802]: E1004 04:47:28.359792 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.385507 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.410070 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c674
4d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.427995 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.428170 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.428242 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.428509 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.428540 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.430184 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.449037 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.468301 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.489397 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.509354 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.521858 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc 
kubenswrapper[4802]: I1004 04:47:28.532160 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.532224 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.532239 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.532262 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.532278 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.540876 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.559488 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.579575 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.597702 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.612384 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.628156 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.635491 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.635835 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.635928 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 
04:47:28.636028 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.636123 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.645459 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.658613 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"exitCode\\\":1,\\\"finish
edAt\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"2025-10-04T04:46:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3\\\\n2025-10-04T04:46:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3 to /host/opt/cni/bin/\\\\n2025-10-04T04:46:24Z [verbose] multus-daemon started\\\\n2025-10-04T04:46:24Z [verbose] Readiness Indicator file check\\\\n2025-10-04T04:47:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\
"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.677502 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e62659a026358d17ba45fad7520b34308d438cdde6648a96e387598ddd4ed39\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"message\\\":\\\"483 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-dc98r after 0 failed attempt(s)\\\\nI1004 04:46:50.377876 6483 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1004 04:46:50.377879 6483 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-multus/network-metrics-daemon-n27xq\\\\nF1004 04:46:50.377881 6483 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:46:50Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:46:50.377889 6483 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd9\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"ork_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF1004 04:47:19.379071 6857 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node 
network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:47:19.379086 6857 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1004 04:47:19.3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:47:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.687875 4802 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:28Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.743685 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.743988 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.744111 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.744229 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.744321 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.847770 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.847830 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.847842 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.847865 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.847881 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.952046 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.952586 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.952908 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.953101 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:28 crc kubenswrapper[4802]: I1004 04:47:28.953267 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:28Z","lastTransitionTime":"2025-10-04T04:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.056495 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.056557 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.056576 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.056603 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.056624 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.160929 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.160997 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.161016 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.161048 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.161067 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.263759 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.263840 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.263862 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.263899 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.263942 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.359058 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:29 crc kubenswrapper[4802]: E1004 04:47:29.359207 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.366913 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.367077 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.367173 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.367245 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.367331 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.470357 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.470628 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.470784 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.470861 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.470949 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.574497 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.574556 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.574568 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.574591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.574603 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.676942 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.676987 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.676995 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.677012 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.677022 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.779684 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.779760 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.779777 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.779805 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.779828 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.883128 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.883205 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.883232 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.883267 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.883295 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.986141 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.986186 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.986197 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.986216 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:29 crc kubenswrapper[4802]: I1004 04:47:29.986228 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:29Z","lastTransitionTime":"2025-10-04T04:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.090210 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.090280 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.090297 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.090323 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.090341 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.193409 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.193465 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.193474 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.193493 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.193504 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.296500 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.296557 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.296568 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.296591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.296604 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.336586 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.336664 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.336683 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.336708 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.336719 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.350191 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.358684 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.358736 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.358750 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.358773 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.358783 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.358828 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.358878 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.358786 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.358940 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.359121 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.359238 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.373713 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.378397 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.378530 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.378595 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.378716 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.378821 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.392522 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.398361 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.398420 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.398433 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.398458 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.398473 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.413545 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.417701 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.417766 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.417778 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.417801 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.417816 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.430409 4802 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c452f803-794d-4c12-9ed0-ead681c77619\\\",\\\"systemUUID\\\":\\\"827fad0a-2530-4d29-b9e6-eca7ec571a16\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:30Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:30 crc kubenswrapper[4802]: E1004 04:47:30.430532 4802 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.432447 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.432485 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.432495 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.432511 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.432522 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.536011 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.536069 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.536081 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.536099 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.536114 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.638511 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.638574 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.638586 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.638606 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.638616 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.741135 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.741224 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.741240 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.741262 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.741278 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.844563 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.844730 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.844751 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.844780 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.844799 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.947991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.948049 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.948060 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.948082 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:30 crc kubenswrapper[4802]: I1004 04:47:30.948095 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:30Z","lastTransitionTime":"2025-10-04T04:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.051985 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.052055 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.052068 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.052087 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.052099 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.154153 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.154191 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.154202 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.154217 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.154228 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.257235 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.257568 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.257707 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.257807 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.257884 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.359116 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:31 crc kubenswrapper[4802]: E1004 04:47:31.359399 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.361830 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.361885 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.361908 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.361939 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.361962 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.465257 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.465299 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.465309 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.465328 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.465340 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.567900 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.567964 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.567991 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.568017 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.568034 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.671169 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.671203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.671211 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.671227 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.671236 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.774258 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.774328 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.774342 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.774365 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.774382 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.878353 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.878445 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.878470 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.878510 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.878536 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.981518 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.981572 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.981581 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.981605 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:31 crc kubenswrapper[4802]: I1004 04:47:31.981617 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:31Z","lastTransitionTime":"2025-10-04T04:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.084775 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.084837 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.084849 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.084871 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.084889 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.189949 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.190002 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.190034 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.190061 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.190083 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.293433 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.293495 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.293512 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.293543 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.293566 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.359603 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.359739 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:32 crc kubenswrapper[4802]: E1004 04:47:32.359856 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.360025 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:32 crc kubenswrapper[4802]: E1004 04:47:32.360364 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:32 crc kubenswrapper[4802]: E1004 04:47:32.360468 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.375146 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.396757 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.396822 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.396841 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.396866 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.396888 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.500621 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.500701 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.500717 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.500740 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.500757 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.604453 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.604509 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.604529 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.604556 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.604575 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.707131 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.707208 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.707231 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.707259 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.707280 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.810606 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.810680 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.810692 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.810710 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.810723 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.913879 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.914239 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.914330 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.914413 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:32 crc kubenswrapper[4802]: I1004 04:47:32.914491 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:32Z","lastTransitionTime":"2025-10-04T04:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.018152 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.018239 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.018265 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.018300 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.018325 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.123578 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.123678 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.123696 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.123724 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.123743 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.227223 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.227674 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.227794 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.227895 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.227977 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.331069 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.331110 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.331122 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.331139 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.331151 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.359466 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:33 crc kubenswrapper[4802]: E1004 04:47:33.359980 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.360149 4802 scope.go:117] "RemoveContainer" containerID="29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c" Oct 04 04:47:33 crc kubenswrapper[4802]: E1004 04:47:33.360347 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.374754 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3bdc18-4f4c-4405-b065-53e026640bca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09f6c101f943f749a7ab23fdcc689dfccd6a2bf61239c14e40640bd7a52cb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://595bda9c56531562dfdf1dcb7e688683cb3aa1ddc1e129639b8775a0ea5d4d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://595bda9c56531562dfdf1dcb7e688683cb3aa1ddc1e129639b8775a0ea5d4d83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.391348 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.407549 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d19e6fbed054e17d60dc1d57c4c41a7f60bc986eadd71c2d81271336d9724606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b12da91213c48cb2837925ea359c2973252e81c4a58712678600661bd467f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.424182 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.434303 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 
04:47:33.434371 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.434385 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.434432 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.434454 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.440810 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5237e1a-7a1d-426f-bc60-ebafd26fe7ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb2c322aa98eaaf9c8847fc4195aedbd6eb6cb13a3bed4055172f92d38c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8d55cd9fb43a674664a12b98d5200a4cd21c65bb5bb805340dcb2aaa2d87cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72561632e89ccfae8f5912de06ab72ab9d78278f91e18586c551f998ad276ccd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f60fb5d482a54a995c65f127fd2639a14e7c5e89fed03a25215ee2e554b2890\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.455408 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb69b353-8faa-4cd0-a50f-e38971b66edd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13725e1893b7df3529b6803f15e7981f4f036ecae6fc93c6fd930aceb911302b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91767f1c050e3ca4cb16f42ce2e057857bd56d9876897961a69e715c39587d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea44d3ab9908fd2c4f84ec1460fbb0196b33192b9b96bc91bf0ec4538e0df61a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9529521223a64a7a5df86c146ee85d61d4ec2213d5958888e1b1d3625a5beb35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.472309 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.487093 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"611d63c9-e554-40be-aab2-f2ca43f6827b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6f025a3871080ca902731e0e4b5e5c40abac61e80bd990b681cebf42afc75b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471f
cca782d526ec053fec4bd123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szkqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dc98r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.504868 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jpj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1c56664-b32b-475a-89eb-55910da58338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:09Z\\\",\\\"message\\\":\\\"2025-10-04T04:46:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3\\\\n2025-10-04T04:46:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_192b97e3-28c9-480b-8479-f85e0aadaac3 to /host/opt/cni/bin/\\\\n2025-10-04T04:46:24Z [verbose] multus-daemon started\\\\n2025-10-04T04:46:24Z [verbose] 
Readiness Indicator file check\\\\n2025-10-04T04:47:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bcrl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jpj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.536537 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11ac83cd-2981-4717-8cb4-2ca3e302461a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-04T04:47:20Z\\\",\\\"message\\\":\\\"ork_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 
; gw=[10.217.0.1]\\\\nF1004 04:47:19.379071 6857 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:19Z is after 2025-08-24T17:21:41Z]\\\\nI1004 04:47:19.379086 6857 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1004 04:47:19.3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:47:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65ee7610304688db8e
ccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6czkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bw8lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.537554 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.537584 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.537597 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.537614 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.537650 4802 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.548583 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-895zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f50461a0-ea5a-4b08-a1ee-512ab8812dbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2edc817f12614d529d28bb0da9f062f2de5e935ee1fc9a811fb856943b47dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m7bh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-895zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.567755 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dbdf873-3edf-45a3-a3d1-af738d5e5710\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6deb91dd0598ee8544873b9fa3247fb3a35f6762cda04ad4275464cf1bacc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9625912c937949847677c068b2074f746f7c30975291c2384b04bae419d2e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fbb833debdac8cbac9be67c4cdb63182afdd81131ca8ee8e7c6e143c52daea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8a9460545bce75a6a692510c3aed07d88bf63c2a1f68bd7b4fe91bae5936d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd552f311b7af3a764061cafd0f0934d787f188000d95f122d9f070b4ec5fa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89aeb66f162bc596b8e7b158d889fedcc858e4ac24c4b86a81ffc749b454fd53\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbce6325d973a7352271779695c8309f53d5ef7db2d91b19e5133d61f5c2dbf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce607ae39ae0c1d796aa423a44ba7f8725f86166ebe21355b11601ba72207af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.584232 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4ec5aab-1872-4c28-b4da-a45ba1ccf5a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd3b8bd5b77dea0cb67967c3dea23f7f85d0441ab16efb8e889b66efcc91908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed4c85b5e589d60e947092041f13a7eaf564f87c4e56e589d3c5c10d2e2f7e80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://383d62c72cb09709e8d852197ccd85f30e28ae240605d8da03ee52a90625878b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:00Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6202c92dce406bb18ba979e5fa963664397dcb4d05e8a5b067f10226476db979\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b232476677563a10e4af400b465c2ad23169fa272618be868abc561a50c8a3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-04T04:46:19Z\\\",\\\"message\\\":\\\"le observer\\\\nW1004 04:46:18.950339 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1004 04:46:18.950441 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1004 04:46:18.951476 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726344001/tls.crt::/tmp/serving-cert-2726344001/tls.key\\\\\\\"\\\\nI1004 04:46:19.153467 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1004 04:46:19.381438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1004 04:46:19.381469 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1004 04:46:19.381497 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1004 04:46:19.381504 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1004 04:46:19.662872 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1004 04:46:19.662883 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1004 04:46:19.662906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662912 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1004 04:46:19.662917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1004 04:46:19.662921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1004 04:46:19.662926 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1004 04:46:19.662929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1004 04:46:19.676665 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://511a3eb5af45de655b69d52fdb292c31a7d3950d4aa62e7c7aafdb7d91a977ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9a681eed74835c6d2b744346892b35c6744d4bc695b402dd00cdef293989cfc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:45:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:45:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:45:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.597804 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjmgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e180d740-f48b-4755-b3ad-088f40b010ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0676d0e8b4c0a84d7f3006195c93558ec443fd055650e082fe6c94ce531faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25tnf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjmgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.612362 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a371e2b-4a47-45ba-9141-dcd616fa19be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39db5c1c6ba2a99a512f8dba8e21f1e00bb3fe7d9448a221f1504d7a41e7ef68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daa1b182f1c45b6e1a914ea36b8f2c66f22fea12313708e656e3a801062c20a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvmdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rx2p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.631996 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://910127f650eeadfbe230a695f30a648caa2b0afefd770d60fdf60363cf694d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.641090 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.641171 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.641196 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.641222 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.641243 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.648099 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9bd9ef64070e71bf4fa38a9df18eca9de50c796ddd35f67d1a5e7f7958ac800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.670634 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gp55j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"951838a5-12ca-41a9-a0b2-df95499f89ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3e06cd061e4929531f2795a1bc2e801c1ab1c58b6e372663875aeea735ff67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d973503a1a8f2c3e39ebab8c0be5e90d10294f3b6901a8795733c65b78c884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9aacd6b1ae8dacc52de6c4cd6050c4dea162f9336dcb8ed6658b7c2b1802580\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://681e3b575c1c1c58ff268538ae33ec942eef4b2a5a3ed6da321d5a0b8681faee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-10-04T04:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed8ebd1002375f1d4e9e0fa8523bc980fad5757dc586589c6bc7aea10adc533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2e0ea91f3f56e2d0f6c44449730aa9f5168a4c3eca52be46047d137d2f1318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df9d21fe69c8791f2b7af17e43e48c4b7b8ee4ac7577396c4b3a420415b90a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-04T04:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-04T04:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2w78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gp55j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.689945 4802 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n27xq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d189ff1-3446-47fe-bcea-6b09e72a4567\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-04T04:46:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c27hr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-04T04:46:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n27xq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-04T04:47:33Z is after 2025-08-24T17:21:41Z" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.744118 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.744207 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.744219 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.744233 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.744242 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.847455 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.847520 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.847561 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.847586 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.847604 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.950806 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.950879 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.950897 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.950921 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:33 crc kubenswrapper[4802]: I1004 04:47:33.950938 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:33Z","lastTransitionTime":"2025-10-04T04:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.054353 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.054417 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.054435 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.054465 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.054485 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.158077 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.158169 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.158187 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.158211 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.158231 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.262009 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.262055 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.262066 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.262084 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.262096 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.359561 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.359666 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:34 crc kubenswrapper[4802]: E1004 04:47:34.359777 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:34 crc kubenswrapper[4802]: E1004 04:47:34.359953 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.360470 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:34 crc kubenswrapper[4802]: E1004 04:47:34.360912 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.365406 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.365602 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.365800 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.365948 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.366072 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.468980 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.469447 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.469590 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.469761 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.469901 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.574115 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.574191 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.574249 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.574287 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.574311 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.677950 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.678295 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.678384 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.678499 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.678596 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.781356 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.781431 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.781452 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.781477 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.781496 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.884911 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.884994 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.885018 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.885048 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.885072 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.987951 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.988012 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.988033 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.988056 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:34 crc kubenswrapper[4802]: I1004 04:47:34.988071 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:34Z","lastTransitionTime":"2025-10-04T04:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.090967 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.091015 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.091023 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.091038 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.091048 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.193774 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.193829 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.193842 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.193860 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.193873 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.296623 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.296686 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.296697 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.296719 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.296733 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.358745 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:35 crc kubenswrapper[4802]: E1004 04:47:35.358946 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.399401 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.399464 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.399476 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.399495 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.399508 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.502332 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.502429 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.502447 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.502468 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.502484 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.605005 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.605049 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.605060 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.605079 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.605092 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.708380 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.708429 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.708440 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.708458 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.708472 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.811705 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.811764 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.811782 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.811804 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.811817 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.915420 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.915482 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.915494 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.915516 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:35 crc kubenswrapper[4802]: I1004 04:47:35.915530 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:35Z","lastTransitionTime":"2025-10-04T04:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.018714 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.018788 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.018802 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.018842 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.018856 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.121435 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.121487 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.121498 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.121518 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.121531 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.224673 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.224727 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.224736 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.224753 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.224769 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.328204 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.328275 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.328289 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.328312 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.328326 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.358852 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.358902 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.358965 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:36 crc kubenswrapper[4802]: E1004 04:47:36.359598 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:36 crc kubenswrapper[4802]: E1004 04:47:36.359735 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:36 crc kubenswrapper[4802]: E1004 04:47:36.359754 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.431898 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.432367 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.432466 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.432551 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.432664 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.535281 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.535326 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.535339 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.535357 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.535370 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.637552 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.637596 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.637609 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.637628 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.637661 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.741431 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.741521 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.741546 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.741570 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.741590 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.845323 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.845411 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.845430 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.845456 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.845474 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.949024 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.949113 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.949131 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.949159 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.949178 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:36Z","lastTransitionTime":"2025-10-04T04:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:36 crc kubenswrapper[4802]: I1004 04:47:36.993185 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:36 crc kubenswrapper[4802]: E1004 04:47:36.993407 4802 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:47:36 crc kubenswrapper[4802]: E1004 04:47:36.993507 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs podName:0d189ff1-3446-47fe-bcea-6b09e72a4567 nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.993478527 +0000 UTC m=+163.401479222 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs") pod "network-metrics-daemon-n27xq" (UID: "0d189ff1-3446-47fe-bcea-6b09e72a4567") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.052175 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.052256 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.052278 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.052307 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.052329 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.155190 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.155536 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.155674 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.155777 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.155838 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.258453 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.258502 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.258512 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.258526 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.258537 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.359437 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:37 crc kubenswrapper[4802]: E1004 04:47:37.360480 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.361261 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.361314 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.361329 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.361353 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.361366 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.464222 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.464288 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.464300 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.464347 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.464364 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.567405 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.567450 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.567460 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.567477 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.567488 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.669983 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.670039 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.670052 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.670076 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.670090 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.773113 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.773166 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.773182 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.773203 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.773220 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.876480 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.876539 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.876554 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.876575 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.876591 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.979875 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.979935 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.979949 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.979967 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:37 crc kubenswrapper[4802]: I1004 04:47:37.979981 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:37Z","lastTransitionTime":"2025-10-04T04:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.082960 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.083012 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.083023 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.083044 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.083053 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.186719 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.187080 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.187163 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.187300 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.187550 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.290621 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.290696 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.290711 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.290732 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.290747 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.358921 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.359051 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:38 crc kubenswrapper[4802]: E1004 04:47:38.359150 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.359262 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:38 crc kubenswrapper[4802]: E1004 04:47:38.359555 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:38 crc kubenswrapper[4802]: E1004 04:47:38.359801 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.394597 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.395005 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.395071 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.395153 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.395213 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.446151 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gp55j" podStartSLOduration=79.446116682 podStartE2EDuration="1m19.446116682s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.442515062 +0000 UTC m=+100.850515697" watchObservedRunningTime="2025-10-04 04:47:38.446116682 +0000 UTC m=+100.854117307" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.493355 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.493313699 podStartE2EDuration="6.493313699s" podCreationTimestamp="2025-10-04 04:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.477408819 +0000 UTC m=+100.885409474" watchObservedRunningTime="2025-10-04 04:47:38.493313699 +0000 UTC m=+100.901314324" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.499796 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.500115 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.500126 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.500145 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.500154 4802 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.524351 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podStartSLOduration=79.524320458 podStartE2EDuration="1m19.524320458s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.523834384 +0000 UTC m=+100.931835019" watchObservedRunningTime="2025-10-04 04:47:38.524320458 +0000 UTC m=+100.932321073" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.540350 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6jpj5" podStartSLOduration=79.540329571 podStartE2EDuration="1m19.540329571s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.539541889 +0000 UTC m=+100.947542514" watchObservedRunningTime="2025-10-04 04:47:38.540329571 +0000 UTC m=+100.948330196" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.580600 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-895zh" podStartSLOduration=79.580558615 podStartE2EDuration="1m19.580558615s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 
04:47:38.580202795 +0000 UTC m=+100.988203420" watchObservedRunningTime="2025-10-04 04:47:38.580558615 +0000 UTC m=+100.988559270" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.603190 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.603242 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.603256 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.603274 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.603288 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.619515 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.619489783 podStartE2EDuration="48.619489783s" podCreationTimestamp="2025-10-04 04:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.618605708 +0000 UTC m=+101.026606353" watchObservedRunningTime="2025-10-04 04:47:38.619489783 +0000 UTC m=+101.027490408" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.620079 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.620073789 podStartE2EDuration="1m20.620073789s" podCreationTimestamp="2025-10-04 04:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.599686435 +0000 UTC m=+101.007687090" watchObservedRunningTime="2025-10-04 04:47:38.620073789 +0000 UTC m=+101.028074414" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.652370 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rx2p" podStartSLOduration=79.652344383 podStartE2EDuration="1m19.652344383s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.652211819 +0000 UTC m=+101.060212444" watchObservedRunningTime="2025-10-04 04:47:38.652344383 +0000 UTC m=+101.060345008" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.679283 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" 
podStartSLOduration=77.679257168 podStartE2EDuration="1m17.679257168s" podCreationTimestamp="2025-10-04 04:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.678572149 +0000 UTC m=+101.086572784" watchObservedRunningTime="2025-10-04 04:47:38.679257168 +0000 UTC m=+101.087257793" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.695163 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.695139308 podStartE2EDuration="1m18.695139308s" podCreationTimestamp="2025-10-04 04:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.694784608 +0000 UTC m=+101.102785263" watchObservedRunningTime="2025-10-04 04:47:38.695139308 +0000 UTC m=+101.103139933" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.707297 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.707357 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.707369 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.707391 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.707405 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.810171 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.810242 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.810261 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.810291 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.810309 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.919101 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.919138 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.919148 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.919165 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:38 crc kubenswrapper[4802]: I1004 04:47:38.919176 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:38Z","lastTransitionTime":"2025-10-04T04:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.022272 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.022910 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.023049 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.023145 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.023225 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.126122 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.126416 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.126509 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.126654 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.126746 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.229990 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.230305 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.230418 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.230513 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.230742 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.334437 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.334491 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.334502 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.334527 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.334540 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.359895 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:39 crc kubenswrapper[4802]: E1004 04:47:39.360073 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.437915 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.437963 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.437997 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.438045 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.438058 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.540973 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.541014 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.541027 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.541045 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.541057 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.645634 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.645732 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.645759 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.645789 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.645998 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.749391 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.749449 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.749463 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.749506 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.749521 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.852848 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.852933 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.852948 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.852966 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.852978 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.956376 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.956922 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.957034 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.957149 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:39 crc kubenswrapper[4802]: I1004 04:47:39.957252 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:39Z","lastTransitionTime":"2025-10-04T04:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.065515 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.066319 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.066414 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.066585 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.066709 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:40Z","lastTransitionTime":"2025-10-04T04:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.170698 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.170771 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.170795 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.170821 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.170840 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:40Z","lastTransitionTime":"2025-10-04T04:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.273863 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.274293 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.274380 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.274460 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.274552 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:40Z","lastTransitionTime":"2025-10-04T04:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.359008 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:40 crc kubenswrapper[4802]: E1004 04:47:40.359540 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.359895 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:40 crc kubenswrapper[4802]: E1004 04:47:40.360004 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.360180 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:40 crc kubenswrapper[4802]: E1004 04:47:40.360376 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.377528 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.377569 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.377582 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.377598 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.377607 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:40Z","lastTransitionTime":"2025-10-04T04:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.479923 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.479959 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.479970 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.479986 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.479999 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:40Z","lastTransitionTime":"2025-10-04T04:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.583091 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.583516 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.583591 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.583690 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.583986 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:40Z","lastTransitionTime":"2025-10-04T04:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.629236 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.629276 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.629287 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.629304 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.629316 4802 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-04T04:47:40Z","lastTransitionTime":"2025-10-04T04:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.684969 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fjmgk" podStartSLOduration=82.684923715 podStartE2EDuration="1m22.684923715s" podCreationTimestamp="2025-10-04 04:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:38.710358429 +0000 UTC m=+101.118359054" watchObservedRunningTime="2025-10-04 04:47:40.684923715 +0000 UTC m=+103.092924340" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.687399 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7"] Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.688115 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.690694 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.691140 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.691841 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.693037 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.743377 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/af9a6d4c-16a5-432e-9734-415ba48679b4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.743506 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/af9a6d4c-16a5-432e-9734-415ba48679b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.743580 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af9a6d4c-16a5-432e-9734-415ba48679b4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.743626 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/af9a6d4c-16a5-432e-9734-415ba48679b4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.743710 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/af9a6d4c-16a5-432e-9734-415ba48679b4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.844873 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/af9a6d4c-16a5-432e-9734-415ba48679b4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.844940 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/af9a6d4c-16a5-432e-9734-415ba48679b4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.844978 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af9a6d4c-16a5-432e-9734-415ba48679b4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.845043 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/af9a6d4c-16a5-432e-9734-415ba48679b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.845078 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/af9a6d4c-16a5-432e-9734-415ba48679b4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.845234 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/af9a6d4c-16a5-432e-9734-415ba48679b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.845309 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/af9a6d4c-16a5-432e-9734-415ba48679b4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.846364 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/af9a6d4c-16a5-432e-9734-415ba48679b4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 04:47:40.856212 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af9a6d4c-16a5-432e-9734-415ba48679b4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:40 crc kubenswrapper[4802]: I1004 
04:47:40.876073 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af9a6d4c-16a5-432e-9734-415ba48679b4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlkq7\" (UID: \"af9a6d4c-16a5-432e-9734-415ba48679b4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:41 crc kubenswrapper[4802]: I1004 04:47:41.004744 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" Oct 04 04:47:41 crc kubenswrapper[4802]: I1004 04:47:41.359438 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:41 crc kubenswrapper[4802]: E1004 04:47:41.360127 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:42 crc kubenswrapper[4802]: I1004 04:47:42.015288 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" event={"ID":"af9a6d4c-16a5-432e-9734-415ba48679b4","Type":"ContainerStarted","Data":"c431cbc389e1b415ada5c072d47fac5798da2628b983d9caa3fd5669e4fb8c71"} Oct 04 04:47:42 crc kubenswrapper[4802]: I1004 04:47:42.015384 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" event={"ID":"af9a6d4c-16a5-432e-9734-415ba48679b4","Type":"ContainerStarted","Data":"738ec1b258d76e48fefc42bf49e6029bf39b1c53a0dcd7c99b91c2c8736b39b8"} Oct 04 04:47:42 crc kubenswrapper[4802]: I1004 04:47:42.358853 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:42 crc kubenswrapper[4802]: I1004 04:47:42.358955 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:42 crc kubenswrapper[4802]: I1004 04:47:42.359069 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:42 crc kubenswrapper[4802]: E1004 04:47:42.359209 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:42 crc kubenswrapper[4802]: E1004 04:47:42.359403 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:42 crc kubenswrapper[4802]: E1004 04:47:42.359553 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:43 crc kubenswrapper[4802]: I1004 04:47:43.359143 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:43 crc kubenswrapper[4802]: E1004 04:47:43.359344 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:44 crc kubenswrapper[4802]: I1004 04:47:44.358969 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:44 crc kubenswrapper[4802]: I1004 04:47:44.359071 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:44 crc kubenswrapper[4802]: I1004 04:47:44.359188 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:44 crc kubenswrapper[4802]: E1004 04:47:44.360384 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:44 crc kubenswrapper[4802]: E1004 04:47:44.360471 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:44 crc kubenswrapper[4802]: E1004 04:47:44.360659 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:45 crc kubenswrapper[4802]: I1004 04:47:45.359020 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:45 crc kubenswrapper[4802]: E1004 04:47:45.359822 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:46 crc kubenswrapper[4802]: I1004 04:47:46.359514 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:46 crc kubenswrapper[4802]: I1004 04:47:46.359627 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:46 crc kubenswrapper[4802]: E1004 04:47:46.359788 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:46 crc kubenswrapper[4802]: I1004 04:47:46.360047 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:46 crc kubenswrapper[4802]: E1004 04:47:46.360161 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:46 crc kubenswrapper[4802]: E1004 04:47:46.360386 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:46 crc kubenswrapper[4802]: I1004 04:47:46.360621 4802 scope.go:117] "RemoveContainer" containerID="29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c" Oct 04 04:47:46 crc kubenswrapper[4802]: E1004 04:47:46.360822 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:47:47 crc kubenswrapper[4802]: I1004 04:47:47.359704 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:47 crc kubenswrapper[4802]: E1004 04:47:47.359864 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:48 crc kubenswrapper[4802]: I1004 04:47:48.358744 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:48 crc kubenswrapper[4802]: I1004 04:47:48.358744 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:48 crc kubenswrapper[4802]: I1004 04:47:48.358874 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:48 crc kubenswrapper[4802]: E1004 04:47:48.360281 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:48 crc kubenswrapper[4802]: E1004 04:47:48.360441 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:48 crc kubenswrapper[4802]: E1004 04:47:48.360561 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:49 crc kubenswrapper[4802]: I1004 04:47:49.359353 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:49 crc kubenswrapper[4802]: E1004 04:47:49.359506 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:50 crc kubenswrapper[4802]: I1004 04:47:50.359215 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:50 crc kubenswrapper[4802]: I1004 04:47:50.359349 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:50 crc kubenswrapper[4802]: E1004 04:47:50.359405 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:50 crc kubenswrapper[4802]: I1004 04:47:50.359409 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:50 crc kubenswrapper[4802]: E1004 04:47:50.359584 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:50 crc kubenswrapper[4802]: E1004 04:47:50.359702 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:51 crc kubenswrapper[4802]: I1004 04:47:51.359240 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:51 crc kubenswrapper[4802]: E1004 04:47:51.359437 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:52 crc kubenswrapper[4802]: I1004 04:47:52.359187 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:52 crc kubenswrapper[4802]: I1004 04:47:52.359290 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:52 crc kubenswrapper[4802]: I1004 04:47:52.359299 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:52 crc kubenswrapper[4802]: E1004 04:47:52.359439 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:52 crc kubenswrapper[4802]: E1004 04:47:52.359568 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:52 crc kubenswrapper[4802]: E1004 04:47:52.359716 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:53 crc kubenswrapper[4802]: I1004 04:47:53.359064 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:53 crc kubenswrapper[4802]: E1004 04:47:53.359264 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:54 crc kubenswrapper[4802]: I1004 04:47:54.358725 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:54 crc kubenswrapper[4802]: I1004 04:47:54.358725 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:54 crc kubenswrapper[4802]: I1004 04:47:54.358865 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:54 crc kubenswrapper[4802]: E1004 04:47:54.358926 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:54 crc kubenswrapper[4802]: E1004 04:47:54.358873 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:54 crc kubenswrapper[4802]: E1004 04:47:54.359037 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:55 crc kubenswrapper[4802]: I1004 04:47:55.358879 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:55 crc kubenswrapper[4802]: E1004 04:47:55.359052 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.063222 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/1.log" Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.064128 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/0.log" Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.064178 4802 generic.go:334] "Generic (PLEG): container finished" podID="c1c56664-b32b-475a-89eb-55910da58338" containerID="703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2" exitCode=1 Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.064215 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jpj5" event={"ID":"c1c56664-b32b-475a-89eb-55910da58338","Type":"ContainerDied","Data":"703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2"} Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.064276 4802 scope.go:117] "RemoveContainer" containerID="8a7507a506120da8a87f9e191751526bc2f55d08d03ebd7160ae2d09e69ba81c" Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.064724 4802 scope.go:117] "RemoveContainer" containerID="703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2" Oct 04 04:47:56 crc kubenswrapper[4802]: E1004 04:47:56.064906 4802 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6jpj5_openshift-multus(c1c56664-b32b-475a-89eb-55910da58338)\"" pod="openshift-multus/multus-6jpj5" podUID="c1c56664-b32b-475a-89eb-55910da58338" Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.082525 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlkq7" podStartSLOduration=97.082501288 podStartE2EDuration="1m37.082501288s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:47:42.037625351 +0000 UTC m=+104.445625986" watchObservedRunningTime="2025-10-04 04:47:56.082501288 +0000 UTC m=+118.490501913" Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.359593 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.359732 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:56 crc kubenswrapper[4802]: E1004 04:47:56.359800 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:56 crc kubenswrapper[4802]: I1004 04:47:56.359753 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:56 crc kubenswrapper[4802]: E1004 04:47:56.359930 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:56 crc kubenswrapper[4802]: E1004 04:47:56.360043 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:57 crc kubenswrapper[4802]: I1004 04:47:57.070465 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/1.log" Oct 04 04:47:57 crc kubenswrapper[4802]: I1004 04:47:57.359529 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:57 crc kubenswrapper[4802]: E1004 04:47:57.359799 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:47:58 crc kubenswrapper[4802]: I1004 04:47:58.358749 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:47:58 crc kubenswrapper[4802]: I1004 04:47:58.358897 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:47:58 crc kubenswrapper[4802]: I1004 04:47:58.358991 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:47:58 crc kubenswrapper[4802]: E1004 04:47:58.360572 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:47:58 crc kubenswrapper[4802]: I1004 04:47:58.360771 4802 scope.go:117] "RemoveContainer" containerID="29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c" Oct 04 04:47:58 crc kubenswrapper[4802]: E1004 04:47:58.360846 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:47:58 crc kubenswrapper[4802]: E1004 04:47:58.360960 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bw8lw_openshift-ovn-kubernetes(11ac83cd-2981-4717-8cb4-2ca3e302461a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" Oct 04 04:47:58 crc kubenswrapper[4802]: E1004 04:47:58.361092 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:47:58 crc kubenswrapper[4802]: E1004 04:47:58.383121 4802 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 04 04:47:58 crc kubenswrapper[4802]: E1004 04:47:58.490299 4802 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 04 04:47:59 crc kubenswrapper[4802]: I1004 04:47:59.358990 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:47:59 crc kubenswrapper[4802]: E1004 04:47:59.359178 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:48:00 crc kubenswrapper[4802]: I1004 04:48:00.359870 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:00 crc kubenswrapper[4802]: I1004 04:48:00.359945 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:00 crc kubenswrapper[4802]: I1004 04:48:00.360026 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:00 crc kubenswrapper[4802]: E1004 04:48:00.360126 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:48:00 crc kubenswrapper[4802]: E1004 04:48:00.360237 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:48:00 crc kubenswrapper[4802]: E1004 04:48:00.360451 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:48:01 crc kubenswrapper[4802]: I1004 04:48:01.359030 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:01 crc kubenswrapper[4802]: E1004 04:48:01.359277 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:48:02 crc kubenswrapper[4802]: I1004 04:48:02.358935 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:02 crc kubenswrapper[4802]: I1004 04:48:02.359026 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:02 crc kubenswrapper[4802]: I1004 04:48:02.358969 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:02 crc kubenswrapper[4802]: E1004 04:48:02.359155 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:48:02 crc kubenswrapper[4802]: E1004 04:48:02.359367 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:48:02 crc kubenswrapper[4802]: E1004 04:48:02.359510 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:48:03 crc kubenswrapper[4802]: I1004 04:48:03.359626 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:03 crc kubenswrapper[4802]: E1004 04:48:03.359845 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:48:03 crc kubenswrapper[4802]: E1004 04:48:03.491746 4802 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 04 04:48:04 crc kubenswrapper[4802]: I1004 04:48:04.359269 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:04 crc kubenswrapper[4802]: I1004 04:48:04.359277 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:04 crc kubenswrapper[4802]: I1004 04:48:04.359454 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:04 crc kubenswrapper[4802]: E1004 04:48:04.359580 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:48:04 crc kubenswrapper[4802]: E1004 04:48:04.359773 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:48:04 crc kubenswrapper[4802]: E1004 04:48:04.359988 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:48:05 crc kubenswrapper[4802]: I1004 04:48:05.358934 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:05 crc kubenswrapper[4802]: E1004 04:48:05.359116 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:48:06 crc kubenswrapper[4802]: I1004 04:48:06.358929 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:06 crc kubenswrapper[4802]: I1004 04:48:06.359017 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:06 crc kubenswrapper[4802]: E1004 04:48:06.359109 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:48:06 crc kubenswrapper[4802]: I1004 04:48:06.359152 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:06 crc kubenswrapper[4802]: E1004 04:48:06.359342 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:48:06 crc kubenswrapper[4802]: E1004 04:48:06.359462 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:48:07 crc kubenswrapper[4802]: I1004 04:48:07.358961 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:07 crc kubenswrapper[4802]: E1004 04:48:07.359174 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:48:08 crc kubenswrapper[4802]: I1004 04:48:08.359272 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:08 crc kubenswrapper[4802]: I1004 04:48:08.359272 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:08 crc kubenswrapper[4802]: I1004 04:48:08.359503 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:08 crc kubenswrapper[4802]: E1004 04:48:08.360827 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:48:08 crc kubenswrapper[4802]: E1004 04:48:08.360971 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:48:08 crc kubenswrapper[4802]: E1004 04:48:08.361081 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:48:08 crc kubenswrapper[4802]: E1004 04:48:08.492395 4802 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 04 04:48:09 crc kubenswrapper[4802]: I1004 04:48:09.358907 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:09 crc kubenswrapper[4802]: E1004 04:48:09.359364 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:48:09 crc kubenswrapper[4802]: I1004 04:48:09.359550 4802 scope.go:117] "RemoveContainer" containerID="703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2" Oct 04 04:48:10 crc kubenswrapper[4802]: I1004 04:48:10.122081 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/1.log" Oct 04 04:48:10 crc kubenswrapper[4802]: I1004 04:48:10.122155 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jpj5" event={"ID":"c1c56664-b32b-475a-89eb-55910da58338","Type":"ContainerStarted","Data":"6e875be304c527cc69671b43e27d94fc9ad734c107c7e13038c19544996014da"} Oct 04 04:48:10 crc kubenswrapper[4802]: I1004 04:48:10.359965 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:10 crc kubenswrapper[4802]: I1004 04:48:10.360081 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:10 crc kubenswrapper[4802]: E1004 04:48:10.360506 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:48:10 crc kubenswrapper[4802]: I1004 04:48:10.360601 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:10 crc kubenswrapper[4802]: E1004 04:48:10.361241 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:48:10 crc kubenswrapper[4802]: E1004 04:48:10.361728 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:48:10 crc kubenswrapper[4802]: I1004 04:48:10.362005 4802 scope.go:117] "RemoveContainer" containerID="29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c" Oct 04 04:48:11 crc kubenswrapper[4802]: I1004 04:48:11.128128 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/3.log" Oct 04 04:48:11 crc kubenswrapper[4802]: I1004 04:48:11.131850 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerStarted","Data":"4f0d732429f19770b0817692452930258fb7fe8a6f169bffd3e2405933193dab"} Oct 04 04:48:11 crc kubenswrapper[4802]: I1004 04:48:11.132291 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:48:11 crc kubenswrapper[4802]: I1004 04:48:11.162359 
4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podStartSLOduration=112.162333545 podStartE2EDuration="1m52.162333545s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:11.161803369 +0000 UTC m=+133.569804014" watchObservedRunningTime="2025-10-04 04:48:11.162333545 +0000 UTC m=+133.570334170" Oct 04 04:48:11 crc kubenswrapper[4802]: I1004 04:48:11.360524 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:11 crc kubenswrapper[4802]: E1004 04:48:11.360714 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:48:11 crc kubenswrapper[4802]: I1004 04:48:11.371794 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n27xq"] Oct 04 04:48:11 crc kubenswrapper[4802]: I1004 04:48:11.371922 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:11 crc kubenswrapper[4802]: E1004 04:48:11.372017 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:48:12 crc kubenswrapper[4802]: I1004 04:48:12.359436 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:12 crc kubenswrapper[4802]: E1004 04:48:12.359613 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 04 04:48:12 crc kubenswrapper[4802]: I1004 04:48:12.359981 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:12 crc kubenswrapper[4802]: E1004 04:48:12.360166 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 04 04:48:13 crc kubenswrapper[4802]: I1004 04:48:13.358878 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:13 crc kubenswrapper[4802]: I1004 04:48:13.358909 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:13 crc kubenswrapper[4802]: E1004 04:48:13.359035 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n27xq" podUID="0d189ff1-3446-47fe-bcea-6b09e72a4567" Oct 04 04:48:13 crc kubenswrapper[4802]: E1004 04:48:13.359136 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 04 04:48:14 crc kubenswrapper[4802]: I1004 04:48:14.359521 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:14 crc kubenswrapper[4802]: I1004 04:48:14.360356 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:14 crc kubenswrapper[4802]: I1004 04:48:14.363007 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 04 04:48:14 crc kubenswrapper[4802]: I1004 04:48:14.363397 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 04 04:48:15 crc kubenswrapper[4802]: I1004 04:48:15.359252 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:15 crc kubenswrapper[4802]: I1004 04:48:15.359323 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:15 crc kubenswrapper[4802]: I1004 04:48:15.362242 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 04 04:48:15 crc kubenswrapper[4802]: I1004 04:48:15.362963 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 04 04:48:15 crc kubenswrapper[4802]: I1004 04:48:15.363160 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 04 04:48:15 crc kubenswrapper[4802]: I1004 04:48:15.365553 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.817687 4802 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.889626 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g7767"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.890199 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.890549 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.890907 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.892354 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7qhp4"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.892940 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.893286 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7w462"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.894260 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.895576 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.895797 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.896078 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.896180 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.896309 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.899215 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.903312 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.903750 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.906047 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.910334 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.910708 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.910949 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.911496 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.911838 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.911894 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.912031 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.912031 4802 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.912244 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.912277 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.912991 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.913139 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.913448 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.913506 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.913608 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t24w4"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.913783 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.913840 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.913974 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 04 04:48:21 crc 
kubenswrapper[4802]: I1004 04:48:21.913996 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.914050 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.914095 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.914397 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.914862 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.915340 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.915346 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.916103 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-q9tg7"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.916708 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-q9tg7" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.917139 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.917709 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.918917 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6fwp2"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.919317 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.923482 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ch4cq"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.936708 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.961512 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lm8jn"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.962248 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.962617 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-skrqg"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.963190 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.963215 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.963331 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.965023 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.967817 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.967949 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.968055 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.968219 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.968373 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.968869 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.969130 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.969738 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.969931 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 04 04:48:21 crc 
kubenswrapper[4802]: I1004 04:48:21.969970 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.970080 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.970156 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.970216 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.970317 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.970337 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.970446 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.970470 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.971983 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.972241 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.972505 4802 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.972724 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.972919 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.973287 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.974121 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.974546 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.975932 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.975927 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.976577 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/477b9cdc-eacf-45b7-b79f-dccfe481edc6-serving-cert\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.977694 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-hwq6m\" (UniqueName: \"kubernetes.io/projected/d71a3a0a-18b5-4783-8887-79f76803a121-kube-api-access-hwq6m\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.977749 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.977782 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7929a11-35ca-4d0c-9e5b-25c105355711-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.977812 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fac432-21bd-4251-bb24-320cc71f536c-serving-cert\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.977837 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-serving-cert\") pod 
\"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.977863 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25q7d\" (UniqueName: \"kubernetes.io/projected/bf4310b6-043e-47e5-8519-9a513fb8da48-kube-api-access-25q7d\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.977887 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-audit\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.977987 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-serving-cert\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978054 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-image-import-ca\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978110 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978139 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-config\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978168 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978252 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/477b9cdc-eacf-45b7-b79f-dccfe481edc6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978288 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-service-ca-bundle\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978314 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-encryption-config\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978344 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978369 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978394 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-oauth-serving-cert\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978419 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/e7929a11-35ca-4d0c-9e5b-25c105355711-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978441 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b9f7844-b732-44d3-96a3-3cc28364fac8-node-pullsecrets\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978468 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-config\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978493 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-client-ca\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978527 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-etcd-serving-ca\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: 
I1004 04:48:21.978576 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2tcf\" (UniqueName: \"kubernetes.io/projected/eee3cf4f-0b25-4641-865e-8f8101256453-kube-api-access-q2tcf\") pod \"downloads-7954f5f757-q9tg7\" (UID: \"eee3cf4f-0b25-4641-865e-8f8101256453\") " pod="openshift-console/downloads-7954f5f757-q9tg7" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978629 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-trusted-ca-bundle\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978677 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978712 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-config\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978744 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b9f7844-b732-44d3-96a3-3cc28364fac8-audit-dir\") pod 
\"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978765 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npm52\" (UniqueName: \"kubernetes.io/projected/0b9f7844-b732-44d3-96a3-3cc28364fac8-kube-api-access-npm52\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978799 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjbqt\" (UniqueName: \"kubernetes.io/projected/69012085-b35b-4167-aa20-cccec63cdda2-kube-api-access-zjbqt\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978821 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71a3a0a-18b5-4783-8887-79f76803a121-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978841 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984mg\" (UniqueName: \"kubernetes.io/projected/77fac432-21bd-4251-bb24-320cc71f536c-kube-api-access-984mg\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 
04:48:21.978864 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-config\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978882 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf4310b6-043e-47e5-8519-9a513fb8da48-serving-cert\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.978957 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-config\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979022 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-oauth-config\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979047 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-etcd-client\") pod \"apiserver-76f77b778f-7qhp4\" (UID: 
\"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979072 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xfp8\" (UniqueName: \"kubernetes.io/projected/477b9cdc-eacf-45b7-b79f-dccfe481edc6-kube-api-access-9xfp8\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979093 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvzl\" (UniqueName: \"kubernetes.io/projected/64860eca-743c-423a-8ee4-a1e5fd4f667d-kube-api-access-rrvzl\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979111 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-service-ca\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979127 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrjsp\" (UniqueName: \"kubernetes.io/projected/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-kube-api-access-xrjsp\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979146 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71a3a0a-18b5-4783-8887-79f76803a121-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979165 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69012085-b35b-4167-aa20-cccec63cdda2-serving-cert\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979193 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-client-ca\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979215 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7929a11-35ca-4d0c-9e5b-25c105355711-config\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.979689 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.980424 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.981327 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.981744 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.982979 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.983702 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.988191 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.989113 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.991376 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wdmm2"] Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.991544 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 04 04:48:21 crc kubenswrapper[4802]: I1004 04:48:21.992550 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.000080 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.001721 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.001914 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.004174 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f7dzh"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.004843 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.006492 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.007075 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.010388 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.011136 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.015560 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.016080 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kcw54"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.016345 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzxp7"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.019740 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.023767 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.025991 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.026460 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.026572 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.026820 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.026891 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.033990 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.041618 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.042234 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.043879 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.046500 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.048400 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-shvxg"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.052862 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.053957 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.054801 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.059526 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.072030 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.073072 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.073683 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.074677 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cs42x"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.075356 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.075805 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.076325 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.079747 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-btf7l"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.080879 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.081729 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.081857 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.082673 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8307648c-2f7f-4558-aa5f-b629e157221d-audit-dir\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.082743 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/477b9cdc-eacf-45b7-b79f-dccfe481edc6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.082776 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-policies\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.082805 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.082842 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-service-ca-bundle\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.082977 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-encryption-config\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083023 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a011efc4-8846-45fd-8f1a-27d5907889bf-config\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083055 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 
04:48:22.083157 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083200 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-oauth-serving-cert\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083233 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7929a11-35ca-4d0c-9e5b-25c105355711-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083262 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b9f7844-b732-44d3-96a3-3cc28364fac8-node-pullsecrets\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083292 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: 
\"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083317 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b6c8f8a-62a2-4a40-85f9-2fc713b2822a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-blq8p\" (UID: \"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083346 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083384 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2tcf\" (UniqueName: \"kubernetes.io/projected/eee3cf4f-0b25-4641-865e-8f8101256453-kube-api-access-q2tcf\") pod \"downloads-7954f5f757-q9tg7\" (UID: \"eee3cf4f-0b25-4641-865e-8f8101256453\") " pod="openshift-console/downloads-7954f5f757-q9tg7" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083412 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-config\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083439 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-client-ca\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083464 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-etcd-serving-ca\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083493 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8f8k\" (UniqueName: \"kubernetes.io/projected/0452346e-4ae6-4944-8203-fbf3c3273223-kube-api-access-d8f8k\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083521 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlgz4\" (UniqueName: \"kubernetes.io/projected/ac6100d3-2668-4b1e-a78a-6f0703eca64a-kube-api-access-jlgz4\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083552 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-service-ca\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc 
kubenswrapper[4802]: I1004 04:48:22.083580 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a011efc4-8846-45fd-8f1a-27d5907889bf-trusted-ca\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083613 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-trusted-ca-bundle\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083663 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083690 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-config\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083718 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5d3f6c-6b78-44d8-826a-e49742556aaa-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083745 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a011efc4-8846-45fd-8f1a-27d5907889bf-serving-cert\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083773 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083803 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b9f7844-b732-44d3-96a3-3cc28364fac8-audit-dir\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083833 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npm52\" (UniqueName: \"kubernetes.io/projected/0b9f7844-b732-44d3-96a3-3cc28364fac8-kube-api-access-npm52\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083876 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-metrics-certs\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083906 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-etcd-client\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083941 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.083974 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c63f0f-adca-43d1-832d-503873c327c3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084006 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p42vh\" (UniqueName: \"kubernetes.io/projected/8b6c8f8a-62a2-4a40-85f9-2fc713b2822a-kube-api-access-p42vh\") pod \"cluster-samples-operator-665b6dd947-blq8p\" (UID: \"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084040 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-encryption-config\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084071 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084108 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjbqt\" (UniqueName: \"kubernetes.io/projected/69012085-b35b-4167-aa20-cccec63cdda2-kube-api-access-zjbqt\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084140 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71a3a0a-18b5-4783-8887-79f76803a121-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084176 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084206 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/92749979-4252-4f0e-a763-3db89c2a396c-images\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084240 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-984mg\" (UniqueName: \"kubernetes.io/projected/77fac432-21bd-4251-bb24-320cc71f536c-kube-api-access-984mg\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084269 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084298 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-config\") pod \"etcd-operator-b45778765-f7dzh\" 
(UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084326 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-config\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084354 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf4310b6-043e-47e5-8519-9a513fb8da48-serving-cert\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084383 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-stats-auth\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084409 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9388f41-af8d-4194-a8d7-d32733cb786f-metrics-tls\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084434 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqvq\" (UniqueName: 
\"kubernetes.io/projected/ec973550-1440-4e1e-bcbd-34e56eae457b-kube-api-access-msqvq\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084478 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-config\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084510 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-audit-policies\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084537 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084567 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-oauth-config\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 
04:48:22.084595 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-etcd-client\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084625 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2c4a774-339b-4503-a243-1bf95110d082-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-68zld\" (UID: \"e2c4a774-339b-4503-a243-1bf95110d082\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084682 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xfp8\" (UniqueName: \"kubernetes.io/projected/477b9cdc-eacf-45b7-b79f-dccfe481edc6-kube-api-access-9xfp8\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084710 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bcd4770-0856-4233-a401-c8b8a18ec41a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084734 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084758 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5d3f6c-6b78-44d8-826a-e49742556aaa-config\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084782 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvzl\" (UniqueName: \"kubernetes.io/projected/64860eca-743c-423a-8ee4-a1e5fd4f667d-kube-api-access-rrvzl\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084805 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-service-ca\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084825 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-serving-cert\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084849 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27cxd\" (UniqueName: \"kubernetes.io/projected/b2e4c58d-96fa-407f-9563-99d74e773bac-kube-api-access-27cxd\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084874 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrjsp\" (UniqueName: \"kubernetes.io/projected/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-kube-api-access-xrjsp\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084896 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084917 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084941 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec973550-1440-4e1e-bcbd-34e56eae457b-machine-approver-tls\") 
pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.084967 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfclf\" (UniqueName: \"kubernetes.io/projected/92749979-4252-4f0e-a763-3db89c2a396c-kube-api-access-sfclf\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085003 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71a3a0a-18b5-4783-8887-79f76803a121-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085034 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffxmm\" (UniqueName: \"kubernetes.io/projected/966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53-kube-api-access-ffxmm\") pod \"dns-operator-744455d44c-wdmm2\" (UID: \"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085060 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-proxy-tls\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 
04:48:22.085075 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/477b9cdc-eacf-45b7-b79f-dccfe481edc6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085095 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-client-ca\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085219 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69012085-b35b-4167-aa20-cccec63cdda2-serving-cert\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085266 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-dir\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085299 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2e4c58d-96fa-407f-9563-99d74e773bac-serving-cert\") pod \"etcd-operator-b45778765-f7dzh\" (UID: 
\"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085364 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7929a11-35ca-4d0c-9e5b-25c105355711-config\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085399 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53-metrics-tls\") pod \"dns-operator-744455d44c-wdmm2\" (UID: \"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085425 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bcd4770-0856-4233-a401-c8b8a18ec41a-config\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085452 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-client\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085475 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/6f5d3f6c-6b78-44d8-826a-e49742556aaa-images\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085533 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085563 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/477b9cdc-eacf-45b7-b79f-dccfe481edc6-serving-cert\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085596 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwq6m\" (UniqueName: \"kubernetes.io/projected/d71a3a0a-18b5-4783-8887-79f76803a121-kube-api-access-hwq6m\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085622 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085669 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-ca\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085693 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srngn\" (UniqueName: \"kubernetes.io/projected/6f5d3f6c-6b78-44d8-826a-e49742556aaa-kube-api-access-srngn\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085715 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fsjw\" (UniqueName: \"kubernetes.io/projected/e2c4a774-339b-4503-a243-1bf95110d082-kube-api-access-4fsjw\") pod \"package-server-manager-789f6589d5-68zld\" (UID: \"e2c4a774-339b-4503-a243-1bf95110d082\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085739 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92749979-4252-4f0e-a763-3db89c2a396c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085765 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fac432-21bd-4251-bb24-320cc71f536c-serving-cert\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085786 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7929a11-35ca-4d0c-9e5b-25c105355711-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085810 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-default-certificate\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085836 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9388f41-af8d-4194-a8d7-d32733cb786f-trusted-ca\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085859 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c63f0f-adca-43d1-832d-503873c327c3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085883 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec973550-1440-4e1e-bcbd-34e56eae457b-config\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085910 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ns4v\" (UniqueName: \"kubernetes.io/projected/a011efc4-8846-45fd-8f1a-27d5907889bf-kube-api-access-8ns4v\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085945 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-serving-cert\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.085975 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bcd4770-0856-4233-a401-c8b8a18ec41a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086013 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-25q7d\" (UniqueName: \"kubernetes.io/projected/bf4310b6-043e-47e5-8519-9a513fb8da48-kube-api-access-25q7d\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086046 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-audit\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086076 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086106 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqqxj\" (UniqueName: \"kubernetes.io/projected/82c63f0f-adca-43d1-832d-503873c327c3-kube-api-access-bqqxj\") pod \"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086144 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-serving-cert\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " 
pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086171 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0452346e-4ae6-4944-8203-fbf3c3273223-service-ca-bundle\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086197 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92749979-4252-4f0e-a763-3db89c2a396c-proxy-tls\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086225 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqgkj\" (UniqueName: \"kubernetes.io/projected/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-kube-api-access-vqgkj\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086252 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-image-import-ca\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086278 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5nks\" (UniqueName: 
\"kubernetes.io/projected/8307648c-2f7f-4558-aa5f-b629e157221d-kube-api-access-x5nks\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086302 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9388f41-af8d-4194-a8d7-d32733cb786f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086335 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086363 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-config\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086392 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086421 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkqz\" (UniqueName: \"kubernetes.io/projected/c9388f41-af8d-4194-a8d7-d32733cb786f-kube-api-access-5gkqz\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.086448 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec973550-1440-4e1e-bcbd-34e56eae457b-auth-proxy-config\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.087660 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.089593 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-service-ca-bundle\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.091164 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-service-ca\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.093597 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d71a3a0a-18b5-4783-8887-79f76803a121-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.094771 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.095764 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.095865 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.095869 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf4310b6-043e-47e5-8519-9a513fb8da48-serving-cert\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.097082 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-config\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.097684 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-config\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.097746 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-69tth"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.098131 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-config\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.098375 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.098771 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7929a11-35ca-4d0c-9e5b-25c105355711-config\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.100880 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.101558 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.103091 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.103447 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.104074 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-client-ca\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.104510 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-config\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.104705 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-audit\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.104847 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b9f7844-b732-44d3-96a3-3cc28364fac8-audit-dir\") pod \"apiserver-76f77b778f-7qhp4\" (UID: 
\"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.105482 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-etcd-serving-ca\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.107090 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.107471 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.110689 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.110903 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.111697 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-oauth-serving-cert\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.111758 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b9f7844-b732-44d3-96a3-3cc28364fac8-node-pullsecrets\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.112045 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.112174 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-config\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.112339 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-q9tg7"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.112368 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6fwp2"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.112422 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.112843 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.112851 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.113628 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fac432-21bd-4251-bb24-320cc71f536c-serving-cert\") pod 
\"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.114040 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.114113 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-serving-cert\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.115088 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t24w4"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.118529 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.118892 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.119660 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.119782 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: 
I1004 04:48:22.119868 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.120062 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.120209 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.120386 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.120394 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.120722 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.120961 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.121305 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.121569 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.121925 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 
04:48:22.122052 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.121921 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.122376 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.122480 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.122656 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.123605 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.123772 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.124573 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/477b9cdc-eacf-45b7-b79f-dccfe481edc6-serving-cert\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.124807 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 
04:48:22.124959 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.124997 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.119837 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.122002 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.126045 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69012085-b35b-4167-aa20-cccec63cdda2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.126252 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.126313 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-client-ca\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.127770 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.128239 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-trusted-ca-bundle\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.128953 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69012085-b35b-4167-aa20-cccec63cdda2-serving-cert\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.128953 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-etcd-client\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.129106 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-encryption-config\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.129740 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71a3a0a-18b5-4783-8887-79f76803a121-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.132993 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-oauth-config\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.133048 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9f7844-b732-44d3-96a3-3cc28364fac8-serving-cert\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.134361 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.134413 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.137395 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.145718 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7929a11-35ca-4d0c-9e5b-25c105355711-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.148998 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.164264 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.164988 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.165023 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7qhp4"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.166865 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g7767"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.168623 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.192839 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.192955 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7w462"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193687 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-metrics-certs\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc 
kubenswrapper[4802]: I1004 04:48:22.193728 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-etcd-client\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193770 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p42vh\" (UniqueName: \"kubernetes.io/projected/8b6c8f8a-62a2-4a40-85f9-2fc713b2822a-kube-api-access-p42vh\") pod \"cluster-samples-operator-665b6dd947-blq8p\" (UID: \"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193798 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02083d27-6ad7-4b28-8226-e9cc75dc55ba-signing-cabundle\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193824 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193848 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-stats-auth\") pod \"router-default-5444994796-kcw54\" (UID: 
\"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193868 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9388f41-af8d-4194-a8d7-d32733cb786f-metrics-tls\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193884 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-audit-policies\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193907 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bcd4770-0856-4233-a401-c8b8a18ec41a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193928 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mpg\" (UniqueName: \"kubernetes.io/projected/02083d27-6ad7-4b28-8226-e9cc75dc55ba-kube-api-access-k8mpg\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193946 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-serving-cert\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193962 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02083d27-6ad7-4b28-8226-e9cc75dc55ba-signing-key\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193978 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.193994 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194012 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec973550-1440-4e1e-bcbd-34e56eae457b-machine-approver-tls\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194030 
4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194042 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2e4c58d-96fa-407f-9563-99d74e773bac-serving-cert\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194069 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53-metrics-tls\") pod \"dns-operator-744455d44c-wdmm2\" (UID: \"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194088 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-client\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194104 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f5d3f6c-6b78-44d8-826a-e49742556aaa-images\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194123 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-ca\") pod 
\"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194142 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srngn\" (UniqueName: \"kubernetes.io/projected/6f5d3f6c-6b78-44d8-826a-e49742556aaa-kube-api-access-srngn\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194161 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fsjw\" (UniqueName: \"kubernetes.io/projected/e2c4a774-339b-4503-a243-1bf95110d082-kube-api-access-4fsjw\") pod \"package-server-manager-789f6589d5-68zld\" (UID: \"e2c4a774-339b-4503-a243-1bf95110d082\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194184 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92749979-4252-4f0e-a763-3db89c2a396c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194204 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9388f41-af8d-4194-a8d7-d32733cb786f-trusted-ca\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194223 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c63f0f-adca-43d1-832d-503873c327c3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194244 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec973550-1440-4e1e-bcbd-34e56eae457b-config\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194276 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqqxj\" (UniqueName: \"kubernetes.io/projected/82c63f0f-adca-43d1-832d-503873c327c3-kube-api-access-bqqxj\") pod \"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194297 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqgkj\" (UniqueName: \"kubernetes.io/projected/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-kube-api-access-vqgkj\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194326 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkqz\" (UniqueName: \"kubernetes.io/projected/c9388f41-af8d-4194-a8d7-d32733cb786f-kube-api-access-5gkqz\") pod 
\"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194343 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec973550-1440-4e1e-bcbd-34e56eae457b-auth-proxy-config\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194364 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49ddg\" (UniqueName: \"kubernetes.io/projected/7e02eab6-078e-41f3-b53b-1fd83ce2a730-kube-api-access-49ddg\") pod \"control-plane-machine-set-operator-78cbb6b69f-8zrjj\" (UID: \"7e02eab6-078e-41f3-b53b-1fd83ce2a730\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194397 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e02eab6-078e-41f3-b53b-1fd83ce2a730-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8zrjj\" (UID: \"7e02eab6-078e-41f3-b53b-1fd83ce2a730\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194430 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194459 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194480 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194498 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlgz4\" (UniqueName: \"kubernetes.io/projected/ac6100d3-2668-4b1e-a78a-6f0703eca64a-kube-api-access-jlgz4\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194516 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-service-ca\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194533 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a011efc4-8846-45fd-8f1a-27d5907889bf-trusted-ca\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194555 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194570 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194589 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c63f0f-adca-43d1-832d-503873c327c3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.194606 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-encryption-config\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: 
I1004 04:48:22.194622 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.196468 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.197451 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.198992 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec973550-1440-4e1e-bcbd-34e56eae457b-machine-approver-tls\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.199349 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.199451 4802 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.200177 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec973550-1440-4e1e-bcbd-34e56eae457b-auth-proxy-config\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.201333 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.201353 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f5d3f6c-6b78-44d8-826a-e49742556aaa-images\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.201753 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/92749979-4252-4f0e-a763-3db89c2a396c-images\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.201790 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-ca\") pod \"etcd-operator-b45778765-f7dzh\" 
(UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.201827 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.201867 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-config\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.201907 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqvq\" (UniqueName: \"kubernetes.io/projected/ec973550-1440-4e1e-bcbd-34e56eae457b-kube-api-access-msqvq\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.201966 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202000 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2c4a774-339b-4503-a243-1bf95110d082-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-68zld\" (UID: \"e2c4a774-339b-4503-a243-1bf95110d082\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202055 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202090 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5d3f6c-6b78-44d8-826a-e49742556aaa-config\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202134 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202166 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27cxd\" (UniqueName: \"kubernetes.io/projected/b2e4c58d-96fa-407f-9563-99d74e773bac-kube-api-access-27cxd\") pod \"etcd-operator-b45778765-f7dzh\" (UID: 
\"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202196 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfclf\" (UniqueName: \"kubernetes.io/projected/92749979-4252-4f0e-a763-3db89c2a396c-kube-api-access-sfclf\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202234 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffxmm\" (UniqueName: \"kubernetes.io/projected/966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53-kube-api-access-ffxmm\") pod \"dns-operator-744455d44c-wdmm2\" (UID: \"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202469 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-proxy-tls\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202500 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-dir\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202527 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202557 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bcd4770-0856-4233-a401-c8b8a18ec41a-config\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202614 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202667 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ns4v\" (UniqueName: \"kubernetes.io/projected/a011efc4-8846-45fd-8f1a-27d5907889bf-kube-api-access-8ns4v\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202693 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-default-certificate\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc 
kubenswrapper[4802]: I1004 04:48:22.202717 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bcd4770-0856-4233-a401-c8b8a18ec41a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202740 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202777 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0452346e-4ae6-4944-8203-fbf3c3273223-service-ca-bundle\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202808 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92749979-4252-4f0e-a763-3db89c2a396c-proxy-tls\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202832 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5nks\" (UniqueName: \"kubernetes.io/projected/8307648c-2f7f-4558-aa5f-b629e157221d-kube-api-access-x5nks\") pod 
\"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202857 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9388f41-af8d-4194-a8d7-d32733cb786f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.202889 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8307648c-2f7f-4558-aa5f-b629e157221d-audit-dir\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.203166 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-policies\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.203190 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.203357 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a011efc4-8846-45fd-8f1a-27d5907889bf-config\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: 
I1004 04:48:22.203399 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.203435 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b6c8f8a-62a2-4a40-85f9-2fc713b2822a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-blq8p\" (UID: \"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.203464 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8f8k\" (UniqueName: \"kubernetes.io/projected/0452346e-4ae6-4944-8203-fbf3c3273223-kube-api-access-d8f8k\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.203517 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5d3f6c-6b78-44d8-826a-e49742556aaa-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.203546 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a011efc4-8846-45fd-8f1a-27d5907889bf-serving-cert\") pod 
\"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.203767 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.205212 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92749979-4252-4f0e-a763-3db89c2a396c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.205223 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.205981 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bcd4770-0856-4233-a401-c8b8a18ec41a-config\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.206510 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec973550-1440-4e1e-bcbd-34e56eae457b-config\") pod \"machine-approver-56656f9798-ztwc4\" (UID: 
\"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.207059 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a011efc4-8846-45fd-8f1a-27d5907889bf-trusted-ca\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.210084 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.211335 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a011efc4-8846-45fd-8f1a-27d5907889bf-config\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.211623 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8307648c-2f7f-4558-aa5f-b629e157221d-audit-dir\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.212152 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92749979-4252-4f0e-a763-3db89c2a396c-proxy-tls\") pod 
\"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.212200 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-policies\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.212266 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.212831 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5d3f6c-6b78-44d8-826a-e49742556aaa-config\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.212875 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-dir\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.213269 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.214019 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b6c8f8a-62a2-4a40-85f9-2fc713b2822a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-blq8p\" (UID: \"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.214182 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.214186 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a011efc4-8846-45fd-8f1a-27d5907889bf-serving-cert\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.217939 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.218107 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c63f0f-adca-43d1-832d-503873c327c3-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.218117 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bcd4770-0856-4233-a401-c8b8a18ec41a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.218586 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.219033 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5d3f6c-6b78-44d8-826a-e49742556aaa-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.223677 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc 
kubenswrapper[4802]: I1004 04:48:22.223776 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lm8jn"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.223902 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.224397 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ch4cq"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.224502 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c63f0f-adca-43d1-832d-503873c327c3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.225707 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-28hfp"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.226668 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.226765 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.227375 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-skrqg"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.228279 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzxp7"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.228526 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53-metrics-tls\") pod \"dns-operator-744455d44c-wdmm2\" (UID: \"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.231271 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.231294 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.231305 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.234691 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.236341 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.236873 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.236869 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-proxy-tls\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.241772 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.246741 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.250170 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2e4c58d-96fa-407f-9563-99d74e773bac-serving-cert\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.252346 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.255159 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-d89bh"] Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.259045 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.259124 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.259150 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-shvxg"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.259252 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d89bh"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.259829 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.260775 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.261802 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.262841 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-69tth"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.263956 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f7dzh"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.264504 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-client\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.264933 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cs42x"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.266760 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.267003 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.268216 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hqqtl"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.269721 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xwgg6"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.269881 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hqqtl"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.270345 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xwgg6"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.270481 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.271874 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.272399 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.272905 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.273938 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wdmm2"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.275052 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hqqtl"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.276122 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-btf7l"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.277027 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.280223 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xwgg6"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.282623 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-28hfp"]
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.282707 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-etcd-service-ca\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.292550 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b9f7844-b732-44d3-96a3-3cc28364fac8-image-import-ca\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.293010 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/92749979-4252-4f0e-a763-3db89c2a396c-images\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: \"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.293320 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.305037 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.305144 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.305262 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.305518 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02083d27-6ad7-4b28-8226-e9cc75dc55ba-signing-cabundle\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-shvxg"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.305627 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8mpg\" (UniqueName: \"kubernetes.io/projected/02083d27-6ad7-4b28-8226-e9cc75dc55ba-kube-api-access-k8mpg\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-shvxg"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.305684 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02083d27-6ad7-4b28-8226-e9cc75dc55ba-signing-key\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-shvxg"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.305796 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49ddg\" (UniqueName: \"kubernetes.io/projected/7e02eab6-078e-41f3-b53b-1fd83ce2a730-kube-api-access-49ddg\") pod \"control-plane-machine-set-operator-78cbb6b69f-8zrjj\" (UID: \"7e02eab6-078e-41f3-b53b-1fd83ce2a730\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.305837 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e02eab6-078e-41f3-b53b-1fd83ce2a730-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8zrjj\" (UID: \"7e02eab6-078e-41f3-b53b-1fd83ce2a730\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.313336 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.332699 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.334201 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e4c58d-96fa-407f-9563-99d74e773bac-config\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.372803 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.394249 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.412756 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.421309 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9388f41-af8d-4194-a8d7-d32733cb786f-metrics-tls\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.438728 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.441966 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9388f41-af8d-4194-a8d7-d32733cb786f-trusted-ca\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.453074 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.472758 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.478998 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-etcd-client\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.493378 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.513582 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.532820 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.538501 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-serving-cert\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.552941 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.559270 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-audit-policies\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.572704 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.575497 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8307648c-2f7f-4558-aa5f-b629e157221d-encryption-config\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.593350 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.598048 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.613219 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.616205 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8307648c-2f7f-4558-aa5f-b629e157221d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.632830 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.654119 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.663229 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.663318 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.673211 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.693076 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.699571 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2c4a774-339b-4503-a243-1bf95110d082-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-68zld\" (UID: \"e2c4a774-339b-4503-a243-1bf95110d082\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.712992 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.732860 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.738733 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0452346e-4ae6-4944-8203-fbf3c3273223-service-ca-bundle\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.753080 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.773391 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.781317 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-default-certificate\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.793395 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.801436 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-stats-auth\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.813203 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.818574 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0452346e-4ae6-4944-8203-fbf3c3273223-metrics-certs\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.832609 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.853498 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.873625 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.892925 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.913612 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.933845 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.953921 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.974308 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Oct 04 04:48:22 crc kubenswrapper[4802]: I1004 04:48:22.998045 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.013214 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.031332 4802 request.go:700] Waited for 1.004118354s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.033857 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.053287 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.073092 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.077308 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02083d27-6ad7-4b28-8226-e9cc75dc55ba-signing-cabundle\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-shvxg"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.093517 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.114184 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.120661 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02083d27-6ad7-4b28-8226-e9cc75dc55ba-signing-key\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-shvxg"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.133862 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.154400 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.173838 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.194355 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.212934 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.233106 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.253536 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.273609 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.293678 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: E1004 04:48:23.306206 4802 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Oct 04 04:48:23 crc kubenswrapper[4802]: E1004 04:48:23.306321 4802 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Oct 04 04:48:23 crc kubenswrapper[4802]: E1004 04:48:23.306330 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e02eab6-078e-41f3-b53b-1fd83ce2a730-control-plane-machine-set-operator-tls podName:7e02eab6-078e-41f3-b53b-1fd83ce2a730 nodeName:}" failed. No retries permitted until 2025-10-04 04:48:23.806293586 +0000 UTC m=+146.214294221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e02eab6-078e-41f3-b53b-1fd83ce2a730-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-8zrjj" (UID: "7e02eab6-078e-41f3-b53b-1fd83ce2a730") : failed to sync secret cache: timed out waiting for the condition
Oct 04 04:48:23 crc kubenswrapper[4802]: E1004 04:48:23.306503 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-serving-cert podName:30f3fcba-bfce-49ba-90b7-0af1be5c1b61 nodeName:}" failed. No retries permitted until 2025-10-04 04:48:23.806456031 +0000 UTC m=+146.214456666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" (UID: "30f3fcba-bfce-49ba-90b7-0af1be5c1b61") : failed to sync secret cache: timed out waiting for the condition
Oct 04 04:48:23 crc kubenswrapper[4802]: E1004 04:48:23.306209 4802 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition
Oct 04 04:48:23 crc kubenswrapper[4802]: E1004 04:48:23.306559 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-config podName:30f3fcba-bfce-49ba-90b7-0af1be5c1b61 nodeName:}" failed. No retries permitted until 2025-10-04 04:48:23.806547724 +0000 UTC m=+146.214548549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" (UID: "30f3fcba-bfce-49ba-90b7-0af1be5c1b61") : failed to sync configmap cache: timed out waiting for the condition
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.312621 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.334070 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.352711 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.379177 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.393133 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.412972 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.432485 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.452932 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.471885 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.507939 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xfp8\" (UniqueName: \"kubernetes.io/projected/477b9cdc-eacf-45b7-b79f-dccfe481edc6-kube-api-access-9xfp8\") pod \"openshift-config-operator-7777fb866f-7w462\" (UID: \"477b9cdc-eacf-45b7-b79f-dccfe481edc6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.531016 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvzl\" (UniqueName: \"kubernetes.io/projected/64860eca-743c-423a-8ee4-a1e5fd4f667d-kube-api-access-rrvzl\") pod \"console-f9d7485db-6fwp2\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " pod="openshift-console/console-f9d7485db-6fwp2"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.550687 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjbqt\" (UniqueName: \"kubernetes.io/projected/69012085-b35b-4167-aa20-cccec63cdda2-kube-api-access-zjbqt\") pod \"authentication-operator-69f744f599-t24w4\" (UID: \"69012085-b35b-4167-aa20-cccec63cdda2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.569014 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrjsp\" (UniqueName: \"kubernetes.io/projected/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-kube-api-access-xrjsp\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.589184 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-984mg\" (UniqueName: \"kubernetes.io/projected/77fac432-21bd-4251-bb24-320cc71f536c-kube-api-access-984mg\") pod \"controller-manager-879f6c89f-g7767\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g7767"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.592716 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.613327 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.633388 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.652958 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.689425 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2tcf\" (UniqueName: \"kubernetes.io/projected/eee3cf4f-0b25-4641-865e-8f8101256453-kube-api-access-q2tcf\") pod \"downloads-7954f5f757-q9tg7\" (UID: \"eee3cf4f-0b25-4641-865e-8f8101256453\") " pod="openshift-console/downloads-7954f5f757-q9tg7"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.693346 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.696018 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6fwp2"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.713158 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.714356 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.733497 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.753523 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.773480 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.802695 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.810618 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25q7d\" (UniqueName: \"kubernetes.io/projected/bf4310b6-043e-47e5-8519-9a513fb8da48-kube-api-access-25q7d\") pod \"route-controller-manager-6576b87f9c-bw7xx\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.828682 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.830495 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.830712 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e02eab6-078e-41f3-b53b-1fd83ce2a730-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8zrjj\" (UID: \"7e02eab6-078e-41f3-b53b-1fd83ce2a730\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.830773 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.832027 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.834162 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.846221 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e02eab6-078e-41f3-b53b-1fd83ce2a730-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8zrjj\" (UID: \"7e02eab6-078e-41f3-b53b-1fd83ce2a730\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.846293 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwq6m\" (UniqueName: \"kubernetes.io/projected/d71a3a0a-18b5-4783-8887-79f76803a121-kube-api-access-hwq6m\") pod \"openshift-apiserver-operator-796bbdcf4f-dttjb\" (UID: \"d71a3a0a-18b5-4783-8887-79f76803a121\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb"
Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.847636 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.849491 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.851065 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npm52\" (UniqueName: \"kubernetes.io/projected/0b9f7844-b732-44d3-96a3-3cc28364fac8-kube-api-access-npm52\") pod \"apiserver-76f77b778f-7qhp4\" (UID: \"0b9f7844-b732-44d3-96a3-3cc28364fac8\") " pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.871050 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7929a11-35ca-4d0c-9e5b-25c105355711-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ck7kq\" (UID: \"e7929a11-35ca-4d0c-9e5b-25c105355711\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.888993 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-q9tg7" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.890898 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-64gm5\" (UID: \"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.932258 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6fwp2"] Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.938150 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bcd4770-0856-4233-a401-c8b8a18ec41a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m2qbc\" (UID: \"4bcd4770-0856-4233-a401-c8b8a18ec41a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.942924 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g7767"] Oct 04 04:48:23 crc kubenswrapper[4802]: W1004 04:48:23.946417 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64860eca_743c_423a_8ee4_a1e5fd4f667d.slice/crio-2c860be41e73028e35d7e6d250f91451f7a0ca1090f077940c2f31cd18563b76 WatchSource:0}: Error finding container 2c860be41e73028e35d7e6d250f91451f7a0ca1090f077940c2f31cd18563b76: Status 404 returned error can't find the container with id 2c860be41e73028e35d7e6d250f91451f7a0ca1090f077940c2f31cd18563b76 Oct 04 04:48:23 crc kubenswrapper[4802]: W1004 04:48:23.953233 4802 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77fac432_21bd_4251_bb24_320cc71f536c.slice/crio-f4bd211a9428f906b337dd8269bf7c325e769ee1d2d51df7ce33ecb173b56f7f WatchSource:0}: Error finding container f4bd211a9428f906b337dd8269bf7c325e769ee1d2d51df7ce33ecb173b56f7f: Status 404 returned error can't find the container with id f4bd211a9428f906b337dd8269bf7c325e769ee1d2d51df7ce33ecb173b56f7f Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.954553 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.960846 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p42vh\" (UniqueName: \"kubernetes.io/projected/8b6c8f8a-62a2-4a40-85f9-2fc713b2822a-kube-api-access-p42vh\") pod \"cluster-samples-operator-665b6dd947-blq8p\" (UID: \"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" Oct 04 04:48:23 crc kubenswrapper[4802]: I1004 04:48:23.978547 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fsjw\" (UniqueName: \"kubernetes.io/projected/e2c4a774-339b-4503-a243-1bf95110d082-kube-api-access-4fsjw\") pod \"package-server-manager-789f6589d5-68zld\" (UID: \"e2c4a774-339b-4503-a243-1bf95110d082\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.000594 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srngn\" (UniqueName: \"kubernetes.io/projected/6f5d3f6c-6b78-44d8-826a-e49742556aaa-kube-api-access-srngn\") pod \"machine-api-operator-5694c8668f-lm8jn\" (UID: \"6f5d3f6c-6b78-44d8-826a-e49742556aaa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 
04:48:24.017226 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqgkj\" (UniqueName: \"kubernetes.io/projected/38bafc74-f498-4f4e-9b1d-5fbacfad12e8-kube-api-access-vqgkj\") pod \"machine-config-controller-84d6567774-p24bw\" (UID: \"38bafc74-f498-4f4e-9b1d-5fbacfad12e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.031544 4802 request.go:700] Waited for 1.825364347s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.032148 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlgz4\" (UniqueName: \"kubernetes.io/projected/ac6100d3-2668-4b1e-a78a-6f0703eca64a-kube-api-access-jlgz4\") pod \"oauth-openshift-558db77b4-ch4cq\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.052360 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqqxj\" (UniqueName: \"kubernetes.io/projected/82c63f0f-adca-43d1-832d-503873c327c3-kube-api-access-bqqxj\") pod \"openshift-controller-manager-operator-756b6f6bc6-72q7l\" (UID: \"82c63f0f-adca-43d1-832d-503873c327c3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.055567 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.063172 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.068754 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.069741 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ns4v\" (UniqueName: \"kubernetes.io/projected/a011efc4-8846-45fd-8f1a-27d5907889bf-kube-api-access-8ns4v\") pod \"console-operator-58897d9998-skrqg\" (UID: \"a011efc4-8846-45fd-8f1a-27d5907889bf\") " pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.077912 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7w462"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.078202 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.078357 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.091781 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkqz\" (UniqueName: \"kubernetes.io/projected/c9388f41-af8d-4194-a8d7-d32733cb786f-kube-api-access-5gkqz\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.092342 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.112100 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5nks\" (UniqueName: \"kubernetes.io/projected/8307648c-2f7f-4558-aa5f-b629e157221d-kube-api-access-x5nks\") pod \"apiserver-7bbb656c7d-z72dq\" (UID: \"8307648c-2f7f-4558-aa5f-b629e157221d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.125924 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.130668 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.140744 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8f8k\" (UniqueName: \"kubernetes.io/projected/0452346e-4ae6-4944-8203-fbf3c3273223-kube-api-access-d8f8k\") pod \"router-default-5444994796-kcw54\" (UID: \"0452346e-4ae6-4944-8203-fbf3c3273223\") " pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.174140 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9388f41-af8d-4194-a8d7-d32733cb786f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n76dh\" (UID: \"c9388f41-af8d-4194-a8d7-d32733cb786f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.181493 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffxmm\" (UniqueName: 
\"kubernetes.io/projected/966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53-kube-api-access-ffxmm\") pod \"dns-operator-744455d44c-wdmm2\" (UID: \"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.211775 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-q9tg7"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.215366 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27cxd\" (UniqueName: \"kubernetes.io/projected/b2e4c58d-96fa-407f-9563-99d74e773bac-kube-api-access-27cxd\") pod \"etcd-operator-b45778765-f7dzh\" (UID: \"b2e4c58d-96fa-407f-9563-99d74e773bac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.217881 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6fwp2" event={"ID":"64860eca-743c-423a-8ee4-a1e5fd4f667d","Type":"ContainerStarted","Data":"2c860be41e73028e35d7e6d250f91451f7a0ca1090f077940c2f31cd18563b76"} Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.218369 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t24w4"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.223451 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" event={"ID":"477b9cdc-eacf-45b7-b79f-dccfe481edc6","Type":"ContainerStarted","Data":"92fb66238d350507034c0b6efc568b25d219059272af4b3790c219a27c9bd320"} Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.225656 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfclf\" (UniqueName: \"kubernetes.io/projected/92749979-4252-4f0e-a763-3db89c2a396c-kube-api-access-sfclf\") pod \"machine-config-operator-74547568cd-bhrfs\" (UID: 
\"92749979-4252-4f0e-a763-3db89c2a396c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.239567 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.248007 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msqvq\" (UniqueName: \"kubernetes.io/projected/ec973550-1440-4e1e-bcbd-34e56eae457b-kube-api-access-msqvq\") pod \"machine-approver-56656f9798-ztwc4\" (UID: \"ec973550-1440-4e1e-bcbd-34e56eae457b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.259008 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.262084 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" event={"ID":"77fac432-21bd-4251-bb24-320cc71f536c","Type":"ContainerStarted","Data":"f4bd211a9428f906b337dd8269bf7c325e769ee1d2d51df7ce33ecb173b56f7f"} Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.273519 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.293176 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.305332 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.307686 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.314903 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.324494 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.335480 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lm8jn"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.336540 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.339614 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.353616 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.371805 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.372968 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.386381 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.395664 4802 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.398920 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.407681 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.413103 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.414166 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.417161 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.436211 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.443314 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.454921 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.473600 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 04 04:48:24 crc kubenswrapper[4802]: W1004 04:48:24.500012 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf4310b6_043e_47e5_8519_9a513fb8da48.slice/crio-eb0760814014619d04efbff99031f9fec20c5878499cb6b84921937c76542282 WatchSource:0}: Error finding container eb0760814014619d04efbff99031f9fec20c5878499cb6b84921937c76542282: Status 404 returned error can't find the container with id eb0760814014619d04efbff99031f9fec20c5878499cb6b84921937c76542282 Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.517291 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30f3fcba-bfce-49ba-90b7-0af1be5c1b61-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tk7lm\" (UID: \"30f3fcba-bfce-49ba-90b7-0af1be5c1b61\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.532110 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mpg\" (UniqueName: \"kubernetes.io/projected/02083d27-6ad7-4b28-8226-e9cc75dc55ba-kube-api-access-k8mpg\") pod \"service-ca-9c57cc56f-shvxg\" (UID: \"02083d27-6ad7-4b28-8226-e9cc75dc55ba\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.534517 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.569053 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49ddg\" (UniqueName: \"kubernetes.io/projected/7e02eab6-078e-41f3-b53b-1fd83ce2a730-kube-api-access-49ddg\") pod \"control-plane-machine-set-operator-78cbb6b69f-8zrjj\" (UID: \"7e02eab6-078e-41f3-b53b-1fd83ce2a730\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.648970 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651235 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17e46aa6-0f6b-4269-b211-5e674bc461f1-serving-cert\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651283 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b442b269-e453-4dc0-853b-7753bc0704a0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: \"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651333 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651376 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-bound-sa-token\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651447 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05d0077a-01e0-4674-ace4-775828fa38ec-srv-cert\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651537 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctbxl\" (UniqueName: \"kubernetes.io/projected/89806d38-9d4c-4b53-b6b2-8e95599f16cc-kube-api-access-ctbxl\") pod \"migrator-59844c95c7-9vmtw\" (UID: \"89806d38-9d4c-4b53-b6b2-8e95599f16cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651594 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b442b269-e453-4dc0-853b-7753bc0704a0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: 
\"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651683 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pxnn\" (UniqueName: \"kubernetes.io/projected/d7b15b12-47c1-4b49-8851-4e01097927d8-kube-api-access-2pxnn\") pod \"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651741 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-webhook-cert\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651889 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmqx\" (UniqueName: \"kubernetes.io/projected/05d0077a-01e0-4674-ace4-775828fa38ec-kube-api-access-ggmqx\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651959 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e46aa6-0f6b-4269-b211-5e674bc461f1-config\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.651992 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-apiservice-cert\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652091 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-trusted-ca\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652118 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmsh2\" (UniqueName: \"kubernetes.io/projected/7cf9948f-b0a7-414a-8d9c-79c8b6235799-kube-api-access-xmsh2\") pod \"multus-admission-controller-857f4d67dd-nzxp7\" (UID: \"7cf9948f-b0a7-414a-8d9c-79c8b6235799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652152 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/049a575e-6351-4aa3-89b0-395dd5dc7af5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652169 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-tmpfs\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: 
\"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652232 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwkdz\" (UniqueName: \"kubernetes.io/projected/b442b269-e453-4dc0-853b-7753bc0704a0-kube-api-access-mwkdz\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: \"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652269 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/049a575e-6351-4aa3-89b0-395dd5dc7af5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652326 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652400 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7b15b12-47c1-4b49-8851-4e01097927d8-config-volume\") pod \"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc 
kubenswrapper[4802]: I1004 04:48:24.652420 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktt84\" (UniqueName: \"kubernetes.io/projected/17e46aa6-0f6b-4269-b211-5e674bc461f1-kube-api-access-ktt84\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652489 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7b15b12-47c1-4b49-8851-4e01097927d8-secret-volume\") pod \"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652546 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-certificates\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652624 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44b8fdcf-0cac-45e3-9899-098633c7e336-profile-collector-cert\") pod \"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652680 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/44b8fdcf-0cac-45e3-9899-098633c7e336-srv-cert\") pod \"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652704 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6pkr\" (UniqueName: \"kubernetes.io/projected/44b8fdcf-0cac-45e3-9899-098633c7e336-kube-api-access-s6pkr\") pod \"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652740 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7cf9948f-b0a7-414a-8d9c-79c8b6235799-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzxp7\" (UID: \"7cf9948f-b0a7-414a-8d9c-79c8b6235799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652775 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/05d0077a-01e0-4674-ace4-775828fa38ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.652798 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfpk5\" (UniqueName: \"kubernetes.io/projected/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-kube-api-access-zfpk5\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.653955 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54xrd\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-kube-api-access-54xrd\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.654032 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbf84\" (UniqueName: \"kubernetes.io/projected/cc36cbe1-f043-49df-bb90-158d61ac67ad-kube-api-access-jbf84\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.654104 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.654132 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-tls\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: E1004 04:48:24.655788 4802 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.155771902 +0000 UTC m=+147.563772527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.672511 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.684413 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5"] Oct 04 04:48:24 crc kubenswrapper[4802]: W1004 04:48:24.695210 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7929a11_35ca_4d0c_9e5b_25c105355711.slice/crio-b98644e74aaccc58a901154d14e9ed164cf947f875c6bb6db6eb0bbb19427a4e WatchSource:0}: Error finding container b98644e74aaccc58a901154d14e9ed164cf947f875c6bb6db6eb0bbb19427a4e: Status 404 returned error can't find the container with id b98644e74aaccc58a901154d14e9ed164cf947f875c6bb6db6eb0bbb19427a4e Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.750624 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.753189 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-7qhp4"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.755319 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:24 crc kubenswrapper[4802]: E1004 04:48:24.755572 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.255535637 +0000 UTC m=+147.663536262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.755630 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbf84\" (UniqueName: \"kubernetes.io/projected/cc36cbe1-f043-49df-bb90-158d61ac67ad-kube-api-access-jbf84\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.755740 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.755771 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npptv\" (UniqueName: \"kubernetes.io/projected/e2fdb3b2-9c9f-4667-9957-b64141823d4f-kube-api-access-npptv\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.755806 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-tls\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.755847 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17e46aa6-0f6b-4269-b211-5e674bc461f1-serving-cert\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.755871 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b442b269-e453-4dc0-853b-7753bc0704a0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: \"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 
04:48:24.755931 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2720405-7826-4e0e-8fd2-68333d982ce4-cert\") pod \"ingress-canary-xwgg6\" (UID: \"a2720405-7826-4e0e-8fd2-68333d982ce4\") " pod="openshift-ingress-canary/ingress-canary-xwgg6" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756001 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756033 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-registration-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756065 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-bound-sa-token\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756101 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05d0077a-01e0-4674-ace4-775828fa38ec-srv-cert\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" 
Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756125 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctbxl\" (UniqueName: \"kubernetes.io/projected/89806d38-9d4c-4b53-b6b2-8e95599f16cc-kube-api-access-ctbxl\") pod \"migrator-59844c95c7-9vmtw\" (UID: \"89806d38-9d4c-4b53-b6b2-8e95599f16cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756153 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b442b269-e453-4dc0-853b-7753bc0704a0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: \"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756175 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pxnn\" (UniqueName: \"kubernetes.io/projected/d7b15b12-47c1-4b49-8851-4e01097927d8-kube-api-access-2pxnn\") pod \"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: E1004 04:48:24.756188 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.256180278 +0000 UTC m=+147.664180903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756241 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-webhook-cert\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756285 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-mountpoint-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756335 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-plugins-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756354 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d7c1e6b-fe65-4286-9d05-1f58e8708707-certs\") pod \"machine-config-server-d89bh\" (UID: 
\"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756404 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggmqx\" (UniqueName: \"kubernetes.io/projected/05d0077a-01e0-4674-ace4-775828fa38ec-kube-api-access-ggmqx\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756429 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e46aa6-0f6b-4269-b211-5e674bc461f1-config\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756450 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2fdb3b2-9c9f-4667-9957-b64141823d4f-metrics-tls\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756479 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-socket-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756506 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756557 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-trusted-ca\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756594 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmsh2\" (UniqueName: \"kubernetes.io/projected/7cf9948f-b0a7-414a-8d9c-79c8b6235799-kube-api-access-xmsh2\") pod \"multus-admission-controller-857f4d67dd-nzxp7\" (UID: \"7cf9948f-b0a7-414a-8d9c-79c8b6235799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756612 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-tmpfs\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756637 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/049a575e-6351-4aa3-89b0-395dd5dc7af5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756729 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g5rd8\" (UniqueName: \"kubernetes.io/projected/a2720405-7826-4e0e-8fd2-68333d982ce4-kube-api-access-g5rd8\") pod \"ingress-canary-xwgg6\" (UID: \"a2720405-7826-4e0e-8fd2-68333d982ce4\") " pod="openshift-ingress-canary/ingress-canary-xwgg6" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756766 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwkdz\" (UniqueName: \"kubernetes.io/projected/b442b269-e453-4dc0-853b-7753bc0704a0-kube-api-access-mwkdz\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: \"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756800 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/049a575e-6351-4aa3-89b0-395dd5dc7af5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756822 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9sn\" (UniqueName: \"kubernetes.io/projected/0d7c1e6b-fe65-4286-9d05-1f58e8708707-kube-api-access-ts9sn\") pod \"machine-config-server-d89bh\" (UID: \"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756839 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2fjt\" (UniqueName: \"kubernetes.io/projected/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-kube-api-access-d2fjt\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756862 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756890 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-csi-data-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756948 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktt84\" (UniqueName: \"kubernetes.io/projected/17e46aa6-0f6b-4269-b211-5e674bc461f1-kube-api-access-ktt84\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.756965 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2fdb3b2-9c9f-4667-9957-b64141823d4f-config-volume\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757018 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7b15b12-47c1-4b49-8851-4e01097927d8-config-volume\") pod 
\"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757080 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7b15b12-47c1-4b49-8851-4e01097927d8-secret-volume\") pod \"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757115 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-certificates\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757149 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44b8fdcf-0cac-45e3-9899-098633c7e336-profile-collector-cert\") pod \"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757186 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44b8fdcf-0cac-45e3-9899-098633c7e336-srv-cert\") pod \"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757207 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s6pkr\" (UniqueName: \"kubernetes.io/projected/44b8fdcf-0cac-45e3-9899-098633c7e336-kube-api-access-s6pkr\") pod \"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757230 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7cf9948f-b0a7-414a-8d9c-79c8b6235799-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzxp7\" (UID: \"7cf9948f-b0a7-414a-8d9c-79c8b6235799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757249 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d7c1e6b-fe65-4286-9d05-1f58e8708707-node-bootstrap-token\") pod \"machine-config-server-d89bh\" (UID: \"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757273 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/05d0077a-01e0-4674-ace4-775828fa38ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757313 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfpk5\" (UniqueName: \"kubernetes.io/projected/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-kube-api-access-zfpk5\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.757408 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54xrd\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-kube-api-access-54xrd\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.760285 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.764519 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/049a575e-6351-4aa3-89b0-395dd5dc7af5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.766267 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7b15b12-47c1-4b49-8851-4e01097927d8-config-volume\") pod \"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.767581 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e46aa6-0f6b-4269-b211-5e674bc461f1-config\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.768619 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-tmpfs\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.769030 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-certificates\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.769241 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b442b269-e453-4dc0-853b-7753bc0704a0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: \"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.771818 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-trusted-ca\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.773688 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.777774 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.780615 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-webhook-cert\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.781106 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7cf9948f-b0a7-414a-8d9c-79c8b6235799-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzxp7\" (UID: \"7cf9948f-b0a7-414a-8d9c-79c8b6235799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.782921 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.785461 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/05d0077a-01e0-4674-ace4-775828fa38ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.788502 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05d0077a-01e0-4674-ace4-775828fa38ec-srv-cert\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.789044 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-tls\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.789473 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.790261 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44b8fdcf-0cac-45e3-9899-098633c7e336-srv-cert\") pod 
\"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.790393 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/049a575e-6351-4aa3-89b0-395dd5dc7af5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.790928 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b442b269-e453-4dc0-853b-7753bc0704a0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: \"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.791359 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44b8fdcf-0cac-45e3-9899-098633c7e336-profile-collector-cert\") pod \"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.791859 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7b15b12-47c1-4b49-8851-4e01097927d8-secret-volume\") pod \"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.795013 4802 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-apiservice-cert\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.797726 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pxnn\" (UniqueName: \"kubernetes.io/projected/d7b15b12-47c1-4b49-8851-4e01097927d8-kube-api-access-2pxnn\") pod \"collect-profiles-29325885-fq6vk\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.809118 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54xrd\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-kube-api-access-54xrd\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.819314 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.830711 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p"] Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.856493 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.858751 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859014 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5rd8\" (UniqueName: \"kubernetes.io/projected/a2720405-7826-4e0e-8fd2-68333d982ce4-kube-api-access-g5rd8\") pod \"ingress-canary-xwgg6\" (UID: \"a2720405-7826-4e0e-8fd2-68333d982ce4\") " pod="openshift-ingress-canary/ingress-canary-xwgg6" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859050 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9sn\" (UniqueName: \"kubernetes.io/projected/0d7c1e6b-fe65-4286-9d05-1f58e8708707-kube-api-access-ts9sn\") pod \"machine-config-server-d89bh\" (UID: \"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859068 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2fjt\" (UniqueName: \"kubernetes.io/projected/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-kube-api-access-d2fjt\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859085 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-csi-data-dir\") pod 
\"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859119 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2fdb3b2-9c9f-4667-9957-b64141823d4f-config-volume\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859152 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d7c1e6b-fe65-4286-9d05-1f58e8708707-node-bootstrap-token\") pod \"machine-config-server-d89bh\" (UID: \"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859196 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npptv\" (UniqueName: \"kubernetes.io/projected/e2fdb3b2-9c9f-4667-9957-b64141823d4f-kube-api-access-npptv\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859223 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2720405-7826-4e0e-8fd2-68333d982ce4-cert\") pod \"ingress-canary-xwgg6\" (UID: \"a2720405-7826-4e0e-8fd2-68333d982ce4\") " pod="openshift-ingress-canary/ingress-canary-xwgg6" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859245 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-registration-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: 
\"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859285 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-mountpoint-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859314 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-plugins-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859329 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d7c1e6b-fe65-4286-9d05-1f58e8708707-certs\") pod \"machine-config-server-d89bh\" (UID: \"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859351 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2fdb3b2-9c9f-4667-9957-b64141823d4f-metrics-tls\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859367 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-socket-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859446 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbf84\" (UniqueName: \"kubernetes.io/projected/cc36cbe1-f043-49df-bb90-158d61ac67ad-kube-api-access-jbf84\") pod \"marketplace-operator-79b997595-cs42x\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.859697 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-socket-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.860426 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-plugins-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.860524 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-registration-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.860567 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-mountpoint-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: E1004 04:48:24.860583 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.360550035 +0000 UTC m=+147.768550660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.861218 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-csi-data-dir\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.861536 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2fdb3b2-9c9f-4667-9957-b64141823d4f-config-volume\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.865151 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2fdb3b2-9c9f-4667-9957-b64141823d4f-metrics-tls\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 
04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.865475 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d7c1e6b-fe65-4286-9d05-1f58e8708707-certs\") pod \"machine-config-server-d89bh\" (UID: \"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.868392 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctbxl\" (UniqueName: \"kubernetes.io/projected/89806d38-9d4c-4b53-b6b2-8e95599f16cc-kube-api-access-ctbxl\") pod \"migrator-59844c95c7-9vmtw\" (UID: \"89806d38-9d4c-4b53-b6b2-8e95599f16cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.869121 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a2720405-7826-4e0e-8fd2-68333d982ce4-cert\") pod \"ingress-canary-xwgg6\" (UID: \"a2720405-7826-4e0e-8fd2-68333d982ce4\") " pod="openshift-ingress-canary/ingress-canary-xwgg6" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.869749 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d7c1e6b-fe65-4286-9d05-1f58e8708707-node-bootstrap-token\") pod \"machine-config-server-d89bh\" (UID: \"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.875470 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17e46aa6-0f6b-4269-b211-5e674bc461f1-serving-cert\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc 
kubenswrapper[4802]: I1004 04:48:24.888185 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwkdz\" (UniqueName: \"kubernetes.io/projected/b442b269-e453-4dc0-853b-7753bc0704a0-kube-api-access-mwkdz\") pod \"kube-storage-version-migrator-operator-b67b599dd-lmfqg\" (UID: \"b442b269-e453-4dc0-853b-7753bc0704a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.913006 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktt84\" (UniqueName: \"kubernetes.io/projected/17e46aa6-0f6b-4269-b211-5e674bc461f1-kube-api-access-ktt84\") pod \"service-ca-operator-777779d784-btf7l\" (UID: \"17e46aa6-0f6b-4269-b211-5e674bc461f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.938629 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggmqx\" (UniqueName: \"kubernetes.io/projected/05d0077a-01e0-4674-ace4-775828fa38ec-kube-api-access-ggmqx\") pod \"olm-operator-6b444d44fb-6gzrh\" (UID: \"05d0077a-01e0-4674-ace4-775828fa38ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.948591 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfpk5\" (UniqueName: \"kubernetes.io/projected/6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3-kube-api-access-zfpk5\") pod \"packageserver-d55dfcdfc-tv9vp\" (UID: \"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.961541 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:24 crc kubenswrapper[4802]: E1004 04:48:24.962016 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.461993615 +0000 UTC m=+147.869994240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.967788 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmsh2\" (UniqueName: \"kubernetes.io/projected/7cf9948f-b0a7-414a-8d9c-79c8b6235799-kube-api-access-xmsh2\") pod \"multus-admission-controller-857f4d67dd-nzxp7\" (UID: \"7cf9948f-b0a7-414a-8d9c-79c8b6235799\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" Oct 04 04:48:24 crc kubenswrapper[4802]: I1004 04:48:24.990801 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6pkr\" (UniqueName: \"kubernetes.io/projected/44b8fdcf-0cac-45e3-9899-098633c7e336-kube-api-access-s6pkr\") pod \"catalog-operator-68c6474976-8582w\" (UID: \"44b8fdcf-0cac-45e3-9899-098633c7e336\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.013669 
4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-bound-sa-token\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.028999 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ch4cq"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.044391 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.056084 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.062795 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.063052 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.56299177 +0000 UTC m=+147.970992405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.063146 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.063698 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.563681073 +0000 UTC m=+147.971681698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.070966 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.073786 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.075487 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.082295 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9sn\" (UniqueName: \"kubernetes.io/projected/0d7c1e6b-fe65-4286-9d05-1f58e8708707-kube-api-access-ts9sn\") pod \"machine-config-server-d89bh\" (UID: \"0d7c1e6b-fe65-4286-9d05-1f58e8708707\") " pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.083224 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f7dzh"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.085660 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npptv\" (UniqueName: \"kubernetes.io/projected/e2fdb3b2-9c9f-4667-9957-b64141823d4f-kube-api-access-npptv\") pod \"dns-default-28hfp\" (UID: \"e2fdb3b2-9c9f-4667-9957-b64141823d4f\") " pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.091619 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5rd8\" (UniqueName: \"kubernetes.io/projected/a2720405-7826-4e0e-8fd2-68333d982ce4-kube-api-access-g5rd8\") pod \"ingress-canary-xwgg6\" (UID: \"a2720405-7826-4e0e-8fd2-68333d982ce4\") " pod="openshift-ingress-canary/ingress-canary-xwgg6" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.091901 4802 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.101758 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.110721 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.127032 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.160351 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm"] Oct 04 04:48:25 crc kubenswrapper[4802]: W1004 04:48:25.163838 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2e4c58d_96fa_407f_9563_99d74e773bac.slice/crio-916c99acec6f499236c9b8352664031bb3e801b787dac627254d104792fb2255 WatchSource:0}: Error finding container 916c99acec6f499236c9b8352664031bb3e801b787dac627254d104792fb2255: Status 404 returned error can't find the container with id 916c99acec6f499236c9b8352664031bb3e801b787dac627254d104792fb2255 Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.164482 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2fjt\" (UniqueName: \"kubernetes.io/projected/dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e-kube-api-access-d2fjt\") pod \"csi-hostpathplugin-hqqtl\" (UID: \"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e\") " pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.164608 4802 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.165225 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.665181785 +0000 UTC m=+148.073182410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.170608 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.170693 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.177111 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d89bh" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.180633 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-skrqg"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.197356 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.200575 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wdmm2"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.215882 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.228300 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xwgg6" Oct 04 04:48:25 crc kubenswrapper[4802]: W1004 04:48:25.256939 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82c63f0f_adca_43d1_832d_503873c327c3.slice/crio-9633167a0aa54d69dbe40b0478479c0c4d7aa90c9c8ae8b3a84c7d279676311d WatchSource:0}: Error finding container 9633167a0aa54d69dbe40b0478479c0c4d7aa90c9c8ae8b3a84c7d279676311d: Status 404 returned error can't find the container with id 9633167a0aa54d69dbe40b0478479c0c4d7aa90c9c8ae8b3a84c7d279676311d Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.273548 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.279603 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" event={"ID":"6f5d3f6c-6b78-44d8-826a-e49742556aaa","Type":"ContainerStarted","Data":"16bf97b9b774df7f998f84ec82db3657b9acb08291c8f0c412e9b391655a84c4"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.280911 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" event={"ID":"d71a3a0a-18b5-4783-8887-79f76803a121","Type":"ContainerStarted","Data":"7694b2620af66e89bcf204e0de6b86fb675c847f07b83ac6cab8c9d6d1dbe5d4"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.280942 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" event={"ID":"d71a3a0a-18b5-4783-8887-79f76803a121","Type":"ContainerStarted","Data":"ed7e699ff1936a7de10ce3b33bfa370156eda4715ec5820a62d6e1cbdede7b7f"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.284252 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" event={"ID":"92749979-4252-4f0e-a763-3db89c2a396c","Type":"ContainerStarted","Data":"41722425a30df9fb1f1a51e3379aa959274024e4b460dafcb073c8948b92a62f"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.285684 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" event={"ID":"ec973550-1440-4e1e-bcbd-34e56eae457b","Type":"ContainerStarted","Data":"ced8dc9c5cf9f06a4d391395dd6186e444f8d6d98b32d2ae8aea7929919d54bd"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.287158 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6fwp2" 
event={"ID":"64860eca-743c-423a-8ee4-a1e5fd4f667d","Type":"ContainerStarted","Data":"3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.289603 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" event={"ID":"c9388f41-af8d-4194-a8d7-d32733cb786f","Type":"ContainerStarted","Data":"23d6dd67aba44c97de9f3a28fb7a46ef79e4d71ad013d45998ee48b9d26efc40"} Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.291785 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.791758033 +0000 UTC m=+148.199758658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.297532 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" event={"ID":"e7929a11-35ca-4d0c-9e5b-25c105355711","Type":"ContainerStarted","Data":"b98644e74aaccc58a901154d14e9ed164cf947f875c6bb6db6eb0bbb19427a4e"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.320913 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" event={"ID":"77fac432-21bd-4251-bb24-320cc71f536c","Type":"ContainerStarted","Data":"1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8"} 
Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.322164 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.338914 4802 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g7767 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.338998 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" podUID="77fac432-21bd-4251-bb24-320cc71f536c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.361771 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" event={"ID":"8307648c-2f7f-4558-aa5f-b629e157221d","Type":"ContainerStarted","Data":"c0466113c93e25fbca6bb8f9740870d796abe38fb56830a54a2ccfeab1adee4a"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.368010 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" event={"ID":"4bcd4770-0856-4233-a401-c8b8a18ec41a","Type":"ContainerStarted","Data":"285a118b52e5c737c725137dd1be2e386d78bda650d12bb67c63979eb8cfae4c"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.371586 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" 
event={"ID":"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa","Type":"ContainerStarted","Data":"4475d3f263a937946c51afadf170eb7ae5d1553905168b6fbc36ee110612fa58"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.374370 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-skrqg" event={"ID":"a011efc4-8846-45fd-8f1a-27d5907889bf","Type":"ContainerStarted","Data":"cf2dabe333279012f5bb0c3718a0f19a9bd4efda46c3b79ede8507bf767200c8"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.375967 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" event={"ID":"38bafc74-f498-4f4e-9b1d-5fbacfad12e8","Type":"ContainerStarted","Data":"f1dbf5404b596a374bf5520e8e96568617be388806c100057ca421325b69ce10"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.379377 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.379560 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.879528923 +0000 UTC m=+148.287529548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.379954 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.383131 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" event={"ID":"bf4310b6-043e-47e5-8519-9a513fb8da48","Type":"ContainerStarted","Data":"eb0760814014619d04efbff99031f9fec20c5878499cb6b84921937c76542282"} Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.383168 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:25.883149793 +0000 UTC m=+148.291150498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.387887 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" event={"ID":"82c63f0f-adca-43d1-832d-503873c327c3","Type":"ContainerStarted","Data":"9633167a0aa54d69dbe40b0478479c0c4d7aa90c9c8ae8b3a84c7d279676311d"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.423622 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" event={"ID":"b2e4c58d-96fa-407f-9563-99d74e773bac","Type":"ContainerStarted","Data":"916c99acec6f499236c9b8352664031bb3e801b787dac627254d104792fb2255"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.431237 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" event={"ID":"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53","Type":"ContainerStarted","Data":"12652c2597662b9c5f5dbb11c0e9a8ac6468f9634b2a2432b6344f5a1f50b94b"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.438392 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q9tg7" event={"ID":"eee3cf4f-0b25-4641-865e-8f8101256453","Type":"ContainerStarted","Data":"0083c61f75fa9e638ece752de2e16d575836ff6dc0a084b43aa4572b962c2396"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.438455 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q9tg7" 
event={"ID":"eee3cf4f-0b25-4641-865e-8f8101256453","Type":"ContainerStarted","Data":"8dce0d1fe0aae88ee8ce8a91f448711e12a66469926fe07f0fdcb2fa87020028"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.439040 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-q9tg7" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.440725 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kcw54" event={"ID":"0452346e-4ae6-4944-8203-fbf3c3273223","Type":"ContainerStarted","Data":"0b8f933c42cebaa8e77fdc88b96009223df9ee5927561d613e34096f0957e5cd"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.444216 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.444294 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.454264 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" event={"ID":"e2c4a774-339b-4503-a243-1bf95110d082","Type":"ContainerStarted","Data":"185f0d6664c884adf481990798216607517199ae3d7ce492187b25e774d17170"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.462326 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" 
event={"ID":"69012085-b35b-4167-aa20-cccec63cdda2","Type":"ContainerStarted","Data":"42f2287f686754c28882a9ccbfd9e5ad499ece94b00e6096b013c6f79c02cf78"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.462923 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" event={"ID":"69012085-b35b-4167-aa20-cccec63cdda2","Type":"ContainerStarted","Data":"5943246416ea68cb3414503d71bb3349ef9ee5113b248aaf344a566bdb71f75f"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.472903 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" event={"ID":"477b9cdc-eacf-45b7-b79f-dccfe481edc6","Type":"ContainerStarted","Data":"bdef5e26ad5676d8ed6416e7478728b7f38bca88f43ef01cf0ee61b4ef043885"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.478499 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" event={"ID":"30f3fcba-bfce-49ba-90b7-0af1be5c1b61","Type":"ContainerStarted","Data":"468cda6afce941bcd11f86bdc484e192b63ff39a6a893b888f56b168ea004011"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.483522 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.484302 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:25.984278091 +0000 UTC m=+148.392278716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.509429 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" event={"ID":"ac6100d3-2668-4b1e-a78a-6f0703eca64a","Type":"ContainerStarted","Data":"c79066d6812d5383d6b2dd4109e851cbc2b6dcee27a7233617a159960c872491"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.518099 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" event={"ID":"0b9f7844-b732-44d3-96a3-3cc28364fac8","Type":"ContainerStarted","Data":"d8617785334390b7342b835722089534e478426e2f71d8c30c1961d33702ba4b"} Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.585630 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.586234 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:26.086214198 +0000 UTC m=+148.494214823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.655964 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.661396 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.689959 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.690673 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.190624296 +0000 UTC m=+148.598624921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.704332 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-shvxg"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.708600 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.729928 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" podStartSLOduration=126.729906442 podStartE2EDuration="2m6.729906442s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:25.728794101 +0000 UTC m=+148.136794746" watchObservedRunningTime="2025-10-04 04:48:25.729906442 +0000 UTC m=+148.137907067" Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.768256 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.789626 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj"] Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.792681 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.793422 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.293404289 +0000 UTC m=+148.701404914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: W1004 04:48:25.810299 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89806d38_9d4c_4b53_b6b2_8e95599f16cc.slice/crio-25e93121d8ce199ba71a8edb1a3d63d3331610b5fee93411f7214b8366338f57 WatchSource:0}: Error finding container 25e93121d8ce199ba71a8edb1a3d63d3331610b5fee93411f7214b8366338f57: Status 404 returned error can't find the container with id 25e93121d8ce199ba71a8edb1a3d63d3331610b5fee93411f7214b8366338f57 Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.822910 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-btf7l"] Oct 04 04:48:25 crc kubenswrapper[4802]: W1004 04:48:25.829630 4802 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02083d27_6ad7_4b28_8226_e9cc75dc55ba.slice/crio-5a9fde8e5baa387904fbf66a2b9a8d754ced673da9015027ec3874705aded4c9 WatchSource:0}: Error finding container 5a9fde8e5baa387904fbf66a2b9a8d754ced673da9015027ec3874705aded4c9: Status 404 returned error can't find the container with id 5a9fde8e5baa387904fbf66a2b9a8d754ced673da9015027ec3874705aded4c9 Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.894200 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.894399 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.394362494 +0000 UTC m=+148.802363119 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.894651 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.895041 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.395026862 +0000 UTC m=+148.803027477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: W1004 04:48:25.969413 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e02eab6_078e_41f3_b53b_1fd83ce2a730.slice/crio-bcb4438d6a043a83ec5d9053a48a88945c830ae37c45254da1da9c0745b3d9d3 WatchSource:0}: Error finding container bcb4438d6a043a83ec5d9053a48a88945c830ae37c45254da1da9c0745b3d9d3: Status 404 returned error can't find the container with id bcb4438d6a043a83ec5d9053a48a88945c830ae37c45254da1da9c0745b3d9d3 Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.996335 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.996526 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.496496172 +0000 UTC m=+148.904496797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:25 crc kubenswrapper[4802]: I1004 04:48:25.996900 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:25 crc kubenswrapper[4802]: E1004 04:48:25.997251 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.497243333 +0000 UTC m=+148.905243948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.008192 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-t24w4" podStartSLOduration=127.008167548 podStartE2EDuration="2m7.008167548s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:26.006982795 +0000 UTC m=+148.414983420" watchObservedRunningTime="2025-10-04 04:48:26.008167548 +0000 UTC m=+148.416168183" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.024933 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-28hfp"] Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.047902 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6fwp2" podStartSLOduration=127.04788408 podStartE2EDuration="2m7.04788408s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:26.045976586 +0000 UTC m=+148.453977211" watchObservedRunningTime="2025-10-04 04:48:26.04788408 +0000 UTC m=+148.455884705" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.102248 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.102414 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.602384665 +0000 UTC m=+149.010385290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.102855 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.103264 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.603249459 +0000 UTC m=+149.011250084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: W1004 04:48:26.105272 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d7c1e6b_fe65_4286_9d05_1f58e8708707.slice/crio-000044e7ed8510d6e80c50a34078e7c3ebab55da5213292eab82bf120a9bc7f9 WatchSource:0}: Error finding container 000044e7ed8510d6e80c50a34078e7c3ebab55da5213292eab82bf120a9bc7f9: Status 404 returned error can't find the container with id 000044e7ed8510d6e80c50a34078e7c3ebab55da5213292eab82bf120a9bc7f9 Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.204853 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.208517 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.708470963 +0000 UTC m=+149.116471588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.310695 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.311409 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.811392063 +0000 UTC m=+149.219392688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.382341 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-q9tg7" podStartSLOduration=127.382318448 podStartE2EDuration="2m7.382318448s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:26.380729623 +0000 UTC m=+148.788730258" watchObservedRunningTime="2025-10-04 04:48:26.382318448 +0000 UTC m=+148.790319073" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.414787 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.414982 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.415060 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.415116 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.415150 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.416227 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:26.916205196 +0000 UTC m=+149.324205821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.423867 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.427309 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.430420 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.440559 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.478311 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.502590 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xwgg6"] Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.518386 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.518940 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.01892557 +0000 UTC m=+149.426926195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.563307 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" event={"ID":"b442b269-e453-4dc0-853b-7753bc0704a0","Type":"ContainerStarted","Data":"c9a84e8383f41e931736dfd8e6fe3a64cc938b5d1a015487f523b478738e800e"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.586158 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28hfp" event={"ID":"e2fdb3b2-9c9f-4667-9957-b64141823d4f","Type":"ContainerStarted","Data":"1bab02e82bc4bba30dd1e361580575e41bea86fe0d63baf21f67faaf119e82fc"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.614236 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzxp7"] Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.619404 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" event={"ID":"d7b15b12-47c1-4b49-8851-4e01097927d8","Type":"ContainerStarted","Data":"3df71d39115fa2cf5a309ca5378ff3dac325ad5dd6523bb76242a199f523c218"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.622178 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.622934 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.1229092 +0000 UTC m=+149.530909825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.655109 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" event={"ID":"05d0077a-01e0-4674-ace4-775828fa38ec","Type":"ContainerStarted","Data":"29f4fa195934cde223ed28f63182769ab367f919bde5bb4eb200aaaf6dc6adbb"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.666227 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" event={"ID":"ec973550-1440-4e1e-bcbd-34e56eae457b","Type":"ContainerStarted","Data":"0c1089d1382a388fc01914feba5e90a0f52e612324f487b665372f23a595ba67"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.679214 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w"] Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.680416 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.690671 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.696885 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" event={"ID":"bf4310b6-043e-47e5-8519-9a513fb8da48","Type":"ContainerStarted","Data":"f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.699519 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" event={"ID":"02083d27-6ad7-4b28-8226-e9cc75dc55ba","Type":"ContainerStarted","Data":"5a9fde8e5baa387904fbf66a2b9a8d754ced673da9015027ec3874705aded4c9"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.707223 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" event={"ID":"4bcd4770-0856-4233-a401-c8b8a18ec41a","Type":"ContainerStarted","Data":"41c55bbcd8f8ffe369ad0981fa23fca60ef09fc65e76a4b9131b92eb815f7707"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.714839 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" event={"ID":"17e46aa6-0f6b-4269-b211-5e674bc461f1","Type":"ContainerStarted","Data":"d7066ad95cc2a51e5af1819bf7d31671e892bd68aef3b59b3970d2065ec41e44"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.729804 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.730371 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.230357396 +0000 UTC m=+149.638358021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.780169 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hqqtl"] Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.783289 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cs42x"] Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.783320 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" event={"ID":"e7929a11-35ca-4d0c-9e5b-25c105355711","Type":"ContainerStarted","Data":"8a563d777eb832513469ca95a78bd32d6f57d8c55069fa71575a57f6ef25b74a"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.808563 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp"] Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.810840 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" podStartSLOduration=127.810813928 podStartE2EDuration="2m7.810813928s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:26.780394496 +0000 UTC m=+149.188395121" watchObservedRunningTime="2025-10-04 04:48:26.810813928 +0000 UTC m=+149.218814553" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.828602 4802 generic.go:334] "Generic (PLEG): container finished" podID="477b9cdc-eacf-45b7-b79f-dccfe481edc6" containerID="bdef5e26ad5676d8ed6416e7478728b7f38bca88f43ef01cf0ee61b4ef043885" exitCode=0 Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.828770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" event={"ID":"477b9cdc-eacf-45b7-b79f-dccfe481edc6","Type":"ContainerDied","Data":"bdef5e26ad5676d8ed6416e7478728b7f38bca88f43ef01cf0ee61b4ef043885"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.831096 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.832600 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.332538676 +0000 UTC m=+149.740539301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.859570 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m2qbc" podStartSLOduration=127.859550351 podStartE2EDuration="2m7.859550351s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:26.804786909 +0000 UTC m=+149.212787544" watchObservedRunningTime="2025-10-04 04:48:26.859550351 +0000 UTC m=+149.267550976" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.863107 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.865527 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.365504278 +0000 UTC m=+149.773504983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.868963 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ck7kq" podStartSLOduration=127.868942654 podStartE2EDuration="2m7.868942654s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:26.85951268 +0000 UTC m=+149.267513305" watchObservedRunningTime="2025-10-04 04:48:26.868942654 +0000 UTC m=+149.276943269" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.874883 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kcw54" event={"ID":"0452346e-4ae6-4944-8203-fbf3c3273223","Type":"ContainerStarted","Data":"82c1697c891fdbab32afe7f25c40412766c75434ea6b168816547012ebea90d9"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.879175 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" event={"ID":"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a","Type":"ContainerStarted","Data":"ccfe3d7edabe0145e199407bd6a661fd5034f27c1dbc94a2a6787b29dbeba66e"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.886005 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj" 
event={"ID":"7e02eab6-078e-41f3-b53b-1fd83ce2a730","Type":"ContainerStarted","Data":"bcb4438d6a043a83ec5d9053a48a88945c830ae37c45254da1da9c0745b3d9d3"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.890291 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" event={"ID":"89806d38-9d4c-4b53-b6b2-8e95599f16cc","Type":"ContainerStarted","Data":"25e93121d8ce199ba71a8edb1a3d63d3331610b5fee93411f7214b8366338f57"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.891562 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" event={"ID":"6f5d3f6c-6b78-44d8-826a-e49742556aaa","Type":"ContainerStarted","Data":"dc09f1430efa431a010b43a78ec8fa59c997568d3b93f6ecc1b797429f108c8b"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.898059 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d89bh" event={"ID":"0d7c1e6b-fe65-4286-9d05-1f58e8708707","Type":"ContainerStarted","Data":"000044e7ed8510d6e80c50a34078e7c3ebab55da5213292eab82bf120a9bc7f9"} Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.898587 4802 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g7767 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.898632 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" podUID="77fac432-21bd-4251-bb24-320cc71f536c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.899429 4802 
patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.899478 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:48:26 crc kubenswrapper[4802]: W1004 04:48:26.911512 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc36cbe1_f043_49df_bb90_158d61ac67ad.slice/crio-4777515bb1744dd02220bff607b6976c8032791ffa4fd6f05afe91c8b5b0eb87 WatchSource:0}: Error finding container 4777515bb1744dd02220bff607b6976c8032791ffa4fd6f05afe91c8b5b0eb87: Status 404 returned error can't find the container with id 4777515bb1744dd02220bff607b6976c8032791ffa4fd6f05afe91c8b5b0eb87 Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.913700 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kcw54" podStartSLOduration=127.913672446 podStartE2EDuration="2m7.913672446s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:26.911700861 +0000 UTC m=+149.319701486" watchObservedRunningTime="2025-10-04 04:48:26.913672446 +0000 UTC m=+149.321673061" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.932379 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dttjb" 
podStartSLOduration=127.932363529 podStartE2EDuration="2m7.932363529s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:26.931802053 +0000 UTC m=+149.339802668" watchObservedRunningTime="2025-10-04 04:48:26.932363529 +0000 UTC m=+149.340364154" Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.964446 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.964678 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.464622232 +0000 UTC m=+149.872622857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:26 crc kubenswrapper[4802]: I1004 04:48:26.964983 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:26 crc kubenswrapper[4802]: E1004 04:48:26.966693 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.466677579 +0000 UTC m=+149.874678274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.066487 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.067130 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.567109909 +0000 UTC m=+149.975110534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: W1004 04:48:27.142732 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ac8214a47f880c8351a1ba744f550f92e9db615d041c379f3af1c4545085049a WatchSource:0}: Error finding container ac8214a47f880c8351a1ba744f550f92e9db615d041c379f3af1c4545085049a: Status 404 returned error can't find the container with id ac8214a47f880c8351a1ba744f550f92e9db615d041c379f3af1c4545085049a Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.168451 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.169011 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.66899038 +0000 UTC m=+150.076991005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.269922 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.270138 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.77010805 +0000 UTC m=+150.178108665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.270303 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.270740 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.770721927 +0000 UTC m=+150.178722552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.371030 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.371239 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.871208639 +0000 UTC m=+150.279209284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.371593 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.372010 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.871989871 +0000 UTC m=+150.279990496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.443844 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.472414 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.472742 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:27.972690658 +0000 UTC m=+150.380691313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.472912 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.473140 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:27 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:27 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:27 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.473215 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.473505 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:27.973484621 +0000 UTC m=+150.381485246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.573909 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.574063 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.074038854 +0000 UTC m=+150.482039479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.574096 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.574410 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.074401635 +0000 UTC m=+150.482402260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.674983 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.675167 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.175127033 +0000 UTC m=+150.583127668 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.675963 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.676069 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.176057179 +0000 UTC m=+150.584057804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.777191 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.777462 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.277419246 +0000 UTC m=+150.685419911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.777576 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.778087 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.278066404 +0000 UTC m=+150.686067069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.879036 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.879313 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.379288876 +0000 UTC m=+150.787289501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.902626 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" event={"ID":"b2e4c58d-96fa-407f-9563-99d74e773bac","Type":"ContainerStarted","Data":"49471e3955f45631567bed277a1e8a957ec7edbefdd605f5709468b169607659"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.903443 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xwgg6" event={"ID":"a2720405-7826-4e0e-8fd2-68333d982ce4","Type":"ContainerStarted","Data":"7fc8af09776e4c35401474323ff63c42658e4bb97a663d218c00dc945c3de695"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.904261 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"485bb087dd8ac8ffd0ab795cf43802b63c833bcfd9b8ee712a431e3ba2cc708e"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.905321 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" event={"ID":"a3e82d30-1b34-4bd3-b2bc-db3a6bf12caa","Type":"ContainerStarted","Data":"fdfab3984320ab08fb56a7b3be09cd49757c00e98634ca4f834f597befd827c5"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.907394 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" 
event={"ID":"38bafc74-f498-4f4e-9b1d-5fbacfad12e8","Type":"ContainerStarted","Data":"6fd552b0ac7bff93caacb503dbac528702cb394eae53d0eabeac0168d73b97bb"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.909560 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"481d7170392079108a3ad59edbf05a041d2766199428af7e4043ffc42c5bc89c"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.910674 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" event={"ID":"7cf9948f-b0a7-414a-8d9c-79c8b6235799","Type":"ContainerStarted","Data":"a52a1ab800c321cae4ea6822cff7337ef9657c2a8dbd1c57af5a826ab78f88b1"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.911907 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" event={"ID":"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e","Type":"ContainerStarted","Data":"0cbf66e6a03f0846f78c7e5147c05bad60c2621d0e1e21eb0e59f96fd74bb2d5"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.912743 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" event={"ID":"44b8fdcf-0cac-45e3-9899-098633c7e336","Type":"ContainerStarted","Data":"8ca15d5d3925580e6e6bbdbaa9fb871bc5088f07ae8034f2708dca5168bd494f"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.913510 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" event={"ID":"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3","Type":"ContainerStarted","Data":"4d92de9705df85495e00650f1489e7c996eb070a75c9c2a69a8b4a15fb4603ec"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.914544 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-skrqg" event={"ID":"a011efc4-8846-45fd-8f1a-27d5907889bf","Type":"ContainerStarted","Data":"c232bcf24a7575c9d1e14e2c974c8ed6e9b4aa8ad9d20ef64c38946301154d8a"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.915246 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ac8214a47f880c8351a1ba744f550f92e9db615d041c379f3af1c4545085049a"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.916172 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" event={"ID":"e2c4a774-339b-4503-a243-1bf95110d082","Type":"ContainerStarted","Data":"d79f09a374433cf8e4ed122d6f0be041e788fc3e1a70c99689f8ed2f78149e24"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.917968 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" event={"ID":"c9388f41-af8d-4194-a8d7-d32733cb786f","Type":"ContainerStarted","Data":"e3973e72ecd52be5899ccdf5b5625b3710776839becc65e31114b40723855f42"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.923333 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" event={"ID":"92749979-4252-4f0e-a763-3db89c2a396c","Type":"ContainerStarted","Data":"eab7d1f747d67f66fa65fe622806e7b77aa68dbcb946208aebff76b9efa28489"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.925753 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" event={"ID":"cc36cbe1-f043-49df-bb90-158d61ac67ad","Type":"ContainerStarted","Data":"4777515bb1744dd02220bff607b6976c8032791ffa4fd6f05afe91c8b5b0eb87"} Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 
04:48:27.926214 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.926344 4802 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-g7767 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.926402 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" podUID="77fac432-21bd-4251-bb24-320cc71f536c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.926971 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-64gm5" podStartSLOduration=128.92695738 podStartE2EDuration="2m8.92695738s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:27.924483331 +0000 UTC m=+150.332483946" watchObservedRunningTime="2025-10-04 04:48:27.92695738 +0000 UTC m=+150.334958015" Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.928782 4802 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bw7xx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.928858 4802 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" podUID="bf4310b6-043e-47e5-8519-9a513fb8da48" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Oct 04 04:48:27 crc kubenswrapper[4802]: I1004 04:48:27.980682 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:27 crc kubenswrapper[4802]: E1004 04:48:27.984568 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.484543991 +0000 UTC m=+150.892544706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.082268 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.082495 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.582463271 +0000 UTC m=+150.990463896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.082904 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.083368 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.583353926 +0000 UTC m=+150.991354551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.183814 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.183991 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.683952001 +0000 UTC m=+151.091952626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.184173 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.184488 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.684479826 +0000 UTC m=+151.092480451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.285011 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.285181 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.785152703 +0000 UTC m=+151.193153328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.285315 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.285632 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.785616236 +0000 UTC m=+151.193616861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.388390 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.388635 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.888554726 +0000 UTC m=+151.296555371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.388954 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.389529 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:28.889492713 +0000 UTC m=+151.297493328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.451127 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:28 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:28 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:28 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.451593 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.490933 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.491353 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:28.991288821 +0000 UTC m=+151.399289446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.592607 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.593083 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.093066659 +0000 UTC m=+151.501067284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.693935 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.193906491 +0000 UTC m=+151.601907116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.693819 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.694301 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.694704 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.194694093 +0000 UTC m=+151.602694718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.796009 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.796246 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.296197473 +0000 UTC m=+151.704198098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.796304 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.796695 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.296678967 +0000 UTC m=+151.704679652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.898610 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.898782 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.398752383 +0000 UTC m=+151.806753018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.899348 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:28 crc kubenswrapper[4802]: E1004 04:48:28.899794 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.399782452 +0000 UTC m=+151.807783077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.965273 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d89bh" event={"ID":"0d7c1e6b-fe65-4286-9d05-1f58e8708707","Type":"ContainerStarted","Data":"576aaa8b80bdeaec97a47e40a9acbd512e7ceca100f9adfd837651370a2dec12"} Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.984429 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d5279d721b7f77f134c7cab6331cb04cd4ec62ab8a4eed041fd5bd29adc21ca8"} Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.985911 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" event={"ID":"cc36cbe1-f043-49df-bb90-158d61ac67ad","Type":"ContainerStarted","Data":"ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952"} Oct 04 04:48:28 crc kubenswrapper[4802]: I1004 04:48:28.989726 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" event={"ID":"c9388f41-af8d-4194-a8d7-d32733cb786f","Type":"ContainerStarted","Data":"081f21166ba00ff286700da827316665e1e3e304d1bca3fc2090e5697270e433"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:28.999984 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.000416 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.500376577 +0000 UTC m=+151.908377212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.002763 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" event={"ID":"44b8fdcf-0cac-45e3-9899-098633c7e336","Type":"ContainerStarted","Data":"a68e50e749d27a5f1efe7e9998c52978a43899077c4d3eddeddbe1d5d50de911"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.009056 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" event={"ID":"02083d27-6ad7-4b28-8226-e9cc75dc55ba","Type":"ContainerStarted","Data":"e4d9c9bf660b1f24f5932c5f0f171f304290934bc6f681d1c2726a5077dae73d"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.011318 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" 
event={"ID":"7cf9948f-b0a7-414a-8d9c-79c8b6235799","Type":"ContainerStarted","Data":"e995a6410e2cc35c61f586cba18706b04c077dc300a621da10b71b2cb7925a5b"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.015079 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" event={"ID":"477b9cdc-eacf-45b7-b79f-dccfe481edc6","Type":"ContainerStarted","Data":"15f56e31334980be93d3fc13b83898c6a3de9df6c29fe7b6eca730aacee24d63"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.028393 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" event={"ID":"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53","Type":"ContainerStarted","Data":"ef7ab5c24a84f92d6f51c99e96c81c7a1cd98f11fce9e3a40b76c4848b78eb6e"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.034245 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" event={"ID":"05d0077a-01e0-4674-ace4-775828fa38ec","Type":"ContainerStarted","Data":"62ac7f7ea5529547a2e470b1a25b2b90f24f73866a62fcbeea6c4a61f31f09b8"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.046612 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" event={"ID":"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a","Type":"ContainerStarted","Data":"a80922d241010d809c9b82f52a62ca92552d13a08f61ce21a9c6ad2a50ddebde"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.061886 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj" event={"ID":"7e02eab6-078e-41f3-b53b-1fd83ce2a730","Type":"ContainerStarted","Data":"9c355f3ee0de98115c0e462adebd43354e5cf64331cde6d6348e4b69b9a3a064"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.067212 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" event={"ID":"ec973550-1440-4e1e-bcbd-34e56eae457b","Type":"ContainerStarted","Data":"14e636abc9d50fe042ded44f2e977513208d3d1c0b06252fd54686e6fba8b3fa"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.078283 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" event={"ID":"82c63f0f-adca-43d1-832d-503873c327c3","Type":"ContainerStarted","Data":"1f32eb74b9bb4cae596a6fb9f6e75b5ab0c7d78a6fe58ae4907485b789af3508"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.086380 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" event={"ID":"b442b269-e453-4dc0-853b-7753bc0704a0","Type":"ContainerStarted","Data":"a7dad61971d52b798902eb0a8a65f0ec96a2d20baf3c42b841a13bf6c76fac31"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.101294 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.101721 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.601705821 +0000 UTC m=+152.009706446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.111890 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28hfp" event={"ID":"e2fdb3b2-9c9f-4667-9957-b64141823d4f","Type":"ContainerStarted","Data":"7f1d15fd67c801733ba5ffa98b9507146895f993b30142095c03f5e2f5476f9c"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.131591 4802 generic.go:334] "Generic (PLEG): container finished" podID="0b9f7844-b732-44d3-96a3-3cc28364fac8" containerID="6eaacbbe65817d9dcd593e0bb623107a2bfa22cc7822d209ac7e771085947189" exitCode=0 Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.131770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" event={"ID":"0b9f7844-b732-44d3-96a3-3cc28364fac8","Type":"ContainerDied","Data":"6eaacbbe65817d9dcd593e0bb623107a2bfa22cc7822d209ac7e771085947189"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.134183 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" event={"ID":"30f3fcba-bfce-49ba-90b7-0af1be5c1b61","Type":"ContainerStarted","Data":"dfae0c45ba7355b42c317b6f3b70ca75f3e1c87f2f6e253cd8c6126c24ea2234"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.140401 4802 generic.go:334] "Generic (PLEG): container finished" podID="8307648c-2f7f-4558-aa5f-b629e157221d" containerID="22fedd7a89488acf4632171749608c414dfd24a52793198f13e99bb4fb7ceafa" exitCode=0 Oct 04 04:48:29 crc kubenswrapper[4802]: 
I1004 04:48:29.140540 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" event={"ID":"8307648c-2f7f-4558-aa5f-b629e157221d","Type":"ContainerDied","Data":"22fedd7a89488acf4632171749608c414dfd24a52793198f13e99bb4fb7ceafa"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.165081 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" event={"ID":"6f5d3f6c-6b78-44d8-826a-e49742556aaa","Type":"ContainerStarted","Data":"95260a2a374ef8b90c8f383f4444b1f51a7f5973952fba670dac36d1df97b365"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.176931 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e20abf7125bdb5a2e5840c08f67b960a6e688624c8d402d974116bdae1d8a072"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.181594 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tk7lm" podStartSLOduration=130.181571226 podStartE2EDuration="2m10.181571226s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:29.18135522 +0000 UTC m=+151.589355865" watchObservedRunningTime="2025-10-04 04:48:29.181571226 +0000 UTC m=+151.589571851" Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.188062 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" event={"ID":"ac6100d3-2668-4b1e-a78a-6f0703eca64a","Type":"ContainerStarted","Data":"aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.191071 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" event={"ID":"17e46aa6-0f6b-4269-b211-5e674bc461f1","Type":"ContainerStarted","Data":"7d8e1c44606e844e7cba4d6ff40cb89f8a1bf474853cdc0dd00e3de1192ab788"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.202673 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.203997 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.703962522 +0000 UTC m=+152.111963157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.215455 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" event={"ID":"38bafc74-f498-4f4e-9b1d-5fbacfad12e8","Type":"ContainerStarted","Data":"dac70f1ce0179084e88920f2340e24163835267003bae57e81d875853ec0040e"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.217213 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5dd977a814bcc3e38c88d1bb702a654d007e92ae1fe1d4a7b0fe7ec8391bd2d2"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.218333 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" event={"ID":"89806d38-9d4c-4b53-b6b2-8e95599f16cc","Type":"ContainerStarted","Data":"f7394f80b3f4226679060a1c3e2e986ec97bd62c7982c235071fedc63d117c49"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.220155 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" event={"ID":"d7b15b12-47c1-4b49-8851-4e01097927d8","Type":"ContainerStarted","Data":"b5b7af3ddffc549f41b44587e1f16b02bde9dd98ebb37093ba52bac3ee56e5e8"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.223481 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-xwgg6" event={"ID":"a2720405-7826-4e0e-8fd2-68333d982ce4","Type":"ContainerStarted","Data":"8e4055f98911b1510283bd611347d9646d5cd4819b632d8d8f62aeb027f40d9f"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.229440 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" event={"ID":"6e8504d4-be58-4ea6-b8eb-8b8fe87ec8a3","Type":"ContainerStarted","Data":"9e0fe0c437b9c41cbc11acba7ace2e9bf700e84572e4303ece4a27cc58f6d3ba"} Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.230846 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-skrqg" Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.231531 4802 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bw7xx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.231567 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" podUID="bf4310b6-043e-47e5-8519-9a513fb8da48" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.239973 4802 patch_prober.go:28] interesting pod/console-operator-58897d9998-skrqg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.240092 4802 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-skrqg" podUID="a011efc4-8846-45fd-8f1a-27d5907889bf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.265258 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-skrqg" podStartSLOduration=130.265232117 podStartE2EDuration="2m10.265232117s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:29.255515755 +0000 UTC m=+151.663516380" watchObservedRunningTime="2025-10-04 04:48:29.265232117 +0000 UTC m=+151.673232742" Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.283522 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-f7dzh" podStartSLOduration=130.283502268 podStartE2EDuration="2m10.283502268s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:29.282098259 +0000 UTC m=+151.690098894" watchObservedRunningTime="2025-10-04 04:48:29.283502268 +0000 UTC m=+151.691502883" Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.304433 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.306212 4802 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.806195323 +0000 UTC m=+152.214195948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.408935 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.409538 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:29.909511304 +0000 UTC m=+152.317511929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.446843 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:29 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:29 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:29 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.446904 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.511627 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.512115 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:30.012094245 +0000 UTC m=+152.420094870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.617292 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.618282 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.118233825 +0000 UTC m=+152.526234460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.720808 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.721298 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.221281698 +0000 UTC m=+152.629282323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.822212 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.822328 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.322293245 +0000 UTC m=+152.730293860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.822657 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.823057 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.323048986 +0000 UTC m=+152.731049611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.924187 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.924382 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.42434521 +0000 UTC m=+152.832345845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:29 crc kubenswrapper[4802]: I1004 04:48:29.924469 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:29 crc kubenswrapper[4802]: E1004 04:48:29.924813 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.424803323 +0000 UTC m=+152.832804018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.026074 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.026295 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.526259582 +0000 UTC m=+152.934260207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.026508 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.027025 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.527013583 +0000 UTC m=+152.935014208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.127702 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.127908 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.627875306 +0000 UTC m=+153.035875931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.128243 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.128676 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.628666308 +0000 UTC m=+153.036666933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.229284 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.229539 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.729482829 +0000 UTC m=+153.137483464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.229899 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.230270 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.73025219 +0000 UTC m=+153.138252815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.237171 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28hfp" event={"ID":"e2fdb3b2-9c9f-4667-9957-b64141823d4f","Type":"ContainerStarted","Data":"14715a8857235a7defc67d94d5d00b2c3c6e88a06f008b6dd6999ee59a956b9a"} Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.238781 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" event={"ID":"92749979-4252-4f0e-a763-3db89c2a396c","Type":"ContainerStarted","Data":"63afa7fbccbaa9795ea1b041a3e039bbb6ff1d95d43961f909db8cbf284c40d6"} Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.241567 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" event={"ID":"89806d38-9d4c-4b53-b6b2-8e95599f16cc","Type":"ContainerStarted","Data":"c5bc50a771454d6ed863d2a3cb3ecd19fce766a32972ca560441d34a96ee2f5a"} Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.244308 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" event={"ID":"966a13cc-cf50-4ac0-8bd0-8fd55bcb4b53","Type":"ContainerStarted","Data":"90e7ce08c94ae8837dd048d78ee90a9d1d3eac3203f96e6f14bcf6ab99179148"} Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.246236 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" 
event={"ID":"8b6c8f8a-62a2-4a40-85f9-2fc713b2822a","Type":"ContainerStarted","Data":"2fe2ef0885fbad7bcdf4f23bf42706ee6389fd5ec5d3a5ce66bcec37f1c1d12f"} Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.249564 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" event={"ID":"e2c4a774-339b-4503-a243-1bf95110d082","Type":"ContainerStarted","Data":"a9795b8056f0920e6f85c69e4dd9db34d8c0758be513338c0d235557b18ff006"} Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.250567 4802 patch_prober.go:28] interesting pod/console-operator-58897d9998-skrqg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.250616 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-skrqg" podUID="a011efc4-8846-45fd-8f1a-27d5907889bf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.250671 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.280466 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bhrfs" podStartSLOduration=131.280444485 podStartE2EDuration="2m11.280444485s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.276092463 +0000 UTC m=+152.684093088" 
watchObservedRunningTime="2025-10-04 04:48:30.280444485 +0000 UTC m=+152.688445110" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.332554 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.332735 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.832707867 +0000 UTC m=+153.240708492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.333863 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.333963 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8zrjj" 
podStartSLOduration=131.333940782 podStartE2EDuration="2m11.333940782s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.332403489 +0000 UTC m=+152.740404104" watchObservedRunningTime="2025-10-04 04:48:30.333940782 +0000 UTC m=+152.741941407" Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.336977 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:30.836955196 +0000 UTC m=+153.244956011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.458242 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.461316 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-btf7l" podStartSLOduration=131.461263035 podStartE2EDuration="2m11.461263035s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.397545162 +0000 UTC m=+152.805545797" watchObservedRunningTime="2025-10-04 04:48:30.461263035 +0000 UTC m=+152.869263660" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.462182 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lmfqg" podStartSLOduration=131.46217729 podStartE2EDuration="2m11.46217729s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.456979615 +0000 UTC m=+152.864980250" watchObservedRunningTime="2025-10-04 04:48:30.46217729 +0000 UTC m=+152.870177915" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.478966 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:30 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:30 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:30 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.479046 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.481585 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:30.981542482 +0000 UTC m=+153.389543107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.513753 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-shvxg" podStartSLOduration=131.513729243 podStartE2EDuration="2m11.513729243s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.513506917 +0000 UTC m=+152.921507542" watchObservedRunningTime="2025-10-04 04:48:30.513729243 +0000 UTC m=+152.921729868" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.556560 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p24bw" podStartSLOduration=131.55652998 podStartE2EDuration="2m11.55652998s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.554568846 +0000 UTC m=+152.962569471" watchObservedRunningTime="2025-10-04 04:48:30.55652998 +0000 UTC m=+152.964530605" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.584669 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.585102 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.085082929 +0000 UTC m=+153.493083554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.602245 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" podStartSLOduration=131.602227099 podStartE2EDuration="2m11.602227099s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.601785177 +0000 UTC m=+153.009785802" watchObservedRunningTime="2025-10-04 04:48:30.602227099 +0000 UTC m=+153.010227714" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.644527 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" podStartSLOduration=132.644503122 podStartE2EDuration="2m12.644503122s" podCreationTimestamp="2025-10-04 
04:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.643497484 +0000 UTC m=+153.051498109" watchObservedRunningTime="2025-10-04 04:48:30.644503122 +0000 UTC m=+153.052503747" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.686753 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.687161 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.187140905 +0000 UTC m=+153.595141530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.714106 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" podStartSLOduration=131.714081789 podStartE2EDuration="2m11.714081789s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.709940223 +0000 UTC m=+153.117940868" watchObservedRunningTime="2025-10-04 04:48:30.714081789 +0000 UTC m=+153.122082424" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.756540 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" podStartSLOduration=131.756519117 podStartE2EDuration="2m11.756519117s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.755876139 +0000 UTC m=+153.163876764" watchObservedRunningTime="2025-10-04 04:48:30.756519117 +0000 UTC m=+153.164519742" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.787946 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blq8p" podStartSLOduration=131.787928405 podStartE2EDuration="2m11.787928405s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.78524158 +0000 UTC m=+153.193242235" watchObservedRunningTime="2025-10-04 04:48:30.787928405 +0000 UTC m=+153.195929030" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.789070 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.789513 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.289491749 +0000 UTC m=+153.697492424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.844310 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n76dh" podStartSLOduration=131.844284462 podStartE2EDuration="2m11.844284462s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.815786745 +0000 UTC m=+153.223787410" watchObservedRunningTime="2025-10-04 04:48:30.844284462 +0000 UTC m=+153.252285087" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.845876 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" podStartSLOduration=131.845866077 podStartE2EDuration="2m11.845866077s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.845743703 +0000 UTC m=+153.253744338" watchObservedRunningTime="2025-10-04 04:48:30.845866077 +0000 UTC m=+153.253866702" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.890174 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.890374 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.390347181 +0000 UTC m=+153.798347816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.890542 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.891054 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.39102794 +0000 UTC m=+153.799028585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.919499 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" podStartSLOduration=131.919474416 podStartE2EDuration="2m11.919474416s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.915085354 +0000 UTC m=+153.323085999" watchObservedRunningTime="2025-10-04 04:48:30.919474416 +0000 UTC m=+153.327475041" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.938306 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-72q7l" podStartSLOduration=131.938284333 podStartE2EDuration="2m11.938284333s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.932765618 +0000 UTC m=+153.340766263" watchObservedRunningTime="2025-10-04 04:48:30.938284333 +0000 UTC m=+153.346284958" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.953514 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-d89bh" podStartSLOduration=8.953485668 podStartE2EDuration="8.953485668s" podCreationTimestamp="2025-10-04 04:48:22 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.951789451 +0000 UTC m=+153.359790076" watchObservedRunningTime="2025-10-04 04:48:30.953485668 +0000 UTC m=+153.361486303" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.971568 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" podStartSLOduration=131.971544353 podStartE2EDuration="2m11.971544353s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:30.968392555 +0000 UTC m=+153.376393190" watchObservedRunningTime="2025-10-04 04:48:30.971544353 +0000 UTC m=+153.379544978" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.979618 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.980473 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.982406 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.982484 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.991694 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:30 crc kubenswrapper[4802]: I1004 04:48:30.991879 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 04 04:48:30 crc kubenswrapper[4802]: E1004 04:48:30.992125 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.492102769 +0000 UTC m=+153.900103394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.023787 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" podStartSLOduration=132.023763215 podStartE2EDuration="2m12.023763215s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:31.001919143 +0000 UTC m=+153.409919778" watchObservedRunningTime="2025-10-04 04:48:31.023763215 +0000 UTC m=+153.431763840" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.023990 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xwgg6" podStartSLOduration=9.023985901 podStartE2EDuration="9.023985901s" podCreationTimestamp="2025-10-04 04:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:31.020257447 +0000 UTC m=+153.428258072" watchObservedRunningTime="2025-10-04 04:48:31.023985901 +0000 UTC m=+153.431986526" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.052311 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lm8jn" podStartSLOduration=132.052284743 podStartE2EDuration="2m12.052284743s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:31.051874281 +0000 UTC m=+153.459874906" watchObservedRunningTime="2025-10-04 04:48:31.052284743 +0000 UTC m=+153.460285368" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.084496 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ztwc4" podStartSLOduration=133.084474053 podStartE2EDuration="2m13.084474053s" podCreationTimestamp="2025-10-04 04:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:31.083932978 +0000 UTC m=+153.491933603" watchObservedRunningTime="2025-10-04 04:48:31.084474053 +0000 UTC m=+153.492474678" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.093050 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a004ef3-3111-4249-8d37-9f293f5a092a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0a004ef3-3111-4249-8d37-9f293f5a092a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.093142 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.093179 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a004ef3-3111-4249-8d37-9f293f5a092a-kubelet-dir\") pod \"revision-pruner-9-crc\" 
(UID: \"0a004ef3-3111-4249-8d37-9f293f5a092a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.093501 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.593486466 +0000 UTC m=+154.001487091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.194584 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.194839 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a004ef3-3111-4249-8d37-9f293f5a092a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0a004ef3-3111-4249-8d37-9f293f5a092a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.194926 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a004ef3-3111-4249-8d37-9f293f5a092a-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"0a004ef3-3111-4249-8d37-9f293f5a092a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.195400 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.695379407 +0000 UTC m=+154.103380042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.195447 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a004ef3-3111-4249-8d37-9f293f5a092a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0a004ef3-3111-4249-8d37-9f293f5a092a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.233939 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a004ef3-3111-4249-8d37-9f293f5a092a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0a004ef3-3111-4249-8d37-9f293f5a092a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.279907 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" 
event={"ID":"7cf9948f-b0a7-414a-8d9c-79c8b6235799","Type":"ContainerStarted","Data":"5ac3fe3a6177694d4d1e5d9ede3dd4146ba5a71b9d06fbc2f67a9b32fbfe53d4"} Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.296159 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.296670 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.79662163 +0000 UTC m=+154.204622325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.297082 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.312096 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wdmm2" podStartSLOduration=132.312075772 podStartE2EDuration="2m12.312075772s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:31.309780298 +0000 UTC m=+153.717780933" watchObservedRunningTime="2025-10-04 04:48:31.312075772 +0000 UTC m=+153.720076407" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.397863 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.398109 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.898075089 +0000 UTC m=+154.306075714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.398556 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.400195 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:31.900168817 +0000 UTC m=+154.308169442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.456577 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:31 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:31 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:31 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.456654 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.499390 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.499843 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:31.999823926 +0000 UTC m=+154.407824551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.601655 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.602064 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.102046846 +0000 UTC m=+154.510047471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.613696 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 04 04:48:31 crc kubenswrapper[4802]: W1004 04:48:31.623542 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0a004ef3_3111_4249_8d37_9f293f5a092a.slice/crio-8fff7cb591d10617139d0bab10f138591c4c6f6afd8b4fec2901560904469255 WatchSource:0}: Error finding container 8fff7cb591d10617139d0bab10f138591c4c6f6afd8b4fec2901560904469255: Status 404 returned error can't find the container with id 8fff7cb591d10617139d0bab10f138591c4c6f6afd8b4fec2901560904469255 Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.702613 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.702809 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.202774685 +0000 UTC m=+154.610775310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.703367 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.703792 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.203778723 +0000 UTC m=+154.611779348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.805324 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.805536 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.30550219 +0000 UTC m=+154.713502815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.805654 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.805979 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.305972523 +0000 UTC m=+154.713973148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:31 crc kubenswrapper[4802]: I1004 04:48:31.906554 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:31 crc kubenswrapper[4802]: E1004 04:48:31.907016 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.40699351 +0000 UTC m=+154.814994135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.008712 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.009214 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.509193979 +0000 UTC m=+154.917194604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.110289 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.110519 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.610482894 +0000 UTC m=+155.018483519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.110661 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.111018 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.611004668 +0000 UTC m=+155.019005293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.212401 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.212780 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.712760086 +0000 UTC m=+155.120760701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.287231 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0a004ef3-3111-4249-8d37-9f293f5a092a","Type":"ContainerStarted","Data":"8fff7cb591d10617139d0bab10f138591c4c6f6afd8b4fec2901560904469255"} Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.289723 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" event={"ID":"0b9f7844-b732-44d3-96a3-3cc28364fac8","Type":"ContainerStarted","Data":"ebc31022cd8d059ffc86184a4671183cb93062e6aa52f94bd589d0d26674abb6"} Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.294965 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" event={"ID":"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e","Type":"ContainerStarted","Data":"396e1b0d6d3db2258b430ee1d6713f6f705871eb745482e3d5dee9cb86abab99"} Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.297041 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" event={"ID":"8307648c-2f7f-4558-aa5f-b629e157221d","Type":"ContainerStarted","Data":"da536358c30016b538afcac8e63212db37a336a5beed33fd22952109a736005a"} Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.297855 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.315310 4802 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzxp7" podStartSLOduration=133.315291255 podStartE2EDuration="2m13.315291255s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:32.31510772 +0000 UTC m=+154.723108345" watchObservedRunningTime="2025-10-04 04:48:32.315291255 +0000 UTC m=+154.723291870" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.315510 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.315894 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.815882831 +0000 UTC m=+155.223883456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.336788 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" podStartSLOduration=133.336766046 podStartE2EDuration="2m13.336766046s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:32.335527081 +0000 UTC m=+154.743527706" watchObservedRunningTime="2025-10-04 04:48:32.336766046 +0000 UTC m=+154.744766671" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.361276 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-28hfp" podStartSLOduration=11.361258381 podStartE2EDuration="11.361258381s" podCreationTimestamp="2025-10-04 04:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:32.360014576 +0000 UTC m=+154.768015211" watchObservedRunningTime="2025-10-04 04:48:32.361258381 +0000 UTC m=+154.769259006" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.381078 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vmtw" podStartSLOduration=133.381053635 podStartE2EDuration="2m13.381053635s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:32.378184785 +0000 UTC m=+154.786185410" watchObservedRunningTime="2025-10-04 04:48:32.381053635 +0000 UTC m=+154.789054260" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.417092 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.417320 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.917291379 +0000 UTC m=+155.325292004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.417539 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.418424 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:32.91841694 +0000 UTC m=+155.326417565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.449833 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:32 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:32 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:32 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.449891 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.521915 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.522305 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:33.022286057 +0000 UTC m=+155.430286682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.624056 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.624437 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.124423315 +0000 UTC m=+155.532423950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.725452 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.733619 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.233589669 +0000 UTC m=+155.641590314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.803832 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.806910 4802 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7w462 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.806979 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" podUID="477b9cdc-eacf-45b7-b79f-dccfe481edc6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.806931 4802 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7w462 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.807061 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" 
podUID="477b9cdc-eacf-45b7-b79f-dccfe481edc6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.807496 4802 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7w462 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.807530 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" podUID="477b9cdc-eacf-45b7-b79f-dccfe481edc6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.829417 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.829786 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.3297702 +0000 UTC m=+155.737770825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.930453 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.930761 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.430719795 +0000 UTC m=+155.838720430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:32 crc kubenswrapper[4802]: I1004 04:48:32.930882 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:32 crc kubenswrapper[4802]: E1004 04:48:32.931299 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.431284501 +0000 UTC m=+155.839285116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.032189 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.032392 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.532352859 +0000 UTC m=+155.940353474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.032841 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.033419 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.533404808 +0000 UTC m=+155.941405433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.134124 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.134361 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.634319992 +0000 UTC m=+156.042320617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.134395 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.134768 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.634754364 +0000 UTC m=+156.042754989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.235328 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.235502 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.735476043 +0000 UTC m=+156.143476668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.235597 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.236044 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.736027568 +0000 UTC m=+156.144028193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.306050 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" event={"ID":"0b9f7844-b732-44d3-96a3-3cc28364fac8","Type":"ContainerStarted","Data":"b53fbd5db873cc99311ae0755d756570849468e2ba47337e6082ba149a24f5b3"}
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.308204 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0a004ef3-3111-4249-8d37-9f293f5a092a","Type":"ContainerStarted","Data":"055f05f0f96670f0e18547f64b46b17a1ab853e2f6b0fd95f9f25fc9b66d2de2"}
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.336340 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.336881 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.83685858 +0000 UTC m=+156.244859205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.437935 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.438428 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:33.938402851 +0000 UTC m=+156.346403486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.450704 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 04 04:48:33 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld
Oct 04 04:48:33 crc kubenswrapper[4802]: [+]process-running ok
Oct 04 04:48:33 crc kubenswrapper[4802]: healthz check failed
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.450785 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.539063 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.539328 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.039300384 +0000 UTC m=+156.447301019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.539513 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.540023 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.040000304 +0000 UTC m=+156.448000999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.641246 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.641462 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.141426962 +0000 UTC m=+156.549427587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.641529 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.641932 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.141924486 +0000 UTC m=+156.549925111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.696801 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6fwp2"
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.697853 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6fwp2"
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.699891 4802 patch_prober.go:28] interesting pod/console-f9d7485db-6fwp2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.699971 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6fwp2" podUID="64860eca-743c-423a-8ee4-a1e5fd4f667d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.723428 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767"
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.742555 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.742802 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.242766018 +0000 UTC m=+156.650766643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.742913 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.743270 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.243256752 +0000 UTC m=+156.651257377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.835353 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.844712 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.846451 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.346414598 +0000 UTC m=+156.754415373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.890245 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.890278 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.890322 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.890354 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Oct 04 04:48:33 crc kubenswrapper[4802]: I1004 04:48:33.947182 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:33 crc kubenswrapper[4802]: E1004 04:48:33.948124 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.448100694 +0000 UTC m=+156.856101319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.048217 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.048418 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.548369009 +0000 UTC m=+156.956369644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.048515 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.049099 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.549080269 +0000 UTC m=+156.957080894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.127338 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.127415 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq"
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.130035 4802 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-z72dq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.37:8443/livez\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.130116 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" podUID="8307648c-2f7f-4558-aa5f-b629e157221d" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.37:8443/livez\": dial tcp 10.217.0.37:8443: connect: connection refused"
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.149603 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.149806 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.649773857 +0000 UTC m=+157.057774482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.149867 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.150232 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.650217279 +0000 UTC m=+157.058217904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.251117 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.251315 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.751277617 +0000 UTC m=+157.159278252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.251738 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.252795 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.752783069 +0000 UTC m=+157.160783914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.306387 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq"
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.315304 4802 generic.go:334] "Generic (PLEG): container finished" podID="0a004ef3-3111-4249-8d37-9f293f5a092a" containerID="055f05f0f96670f0e18547f64b46b17a1ab853e2f6b0fd95f9f25fc9b66d2de2" exitCode=0
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.315369 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0a004ef3-3111-4249-8d37-9f293f5a092a","Type":"ContainerDied","Data":"055f05f0f96670f0e18547f64b46b17a1ab853e2f6b0fd95f9f25fc9b66d2de2"}
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.317459 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" event={"ID":"d7b15b12-47c1-4b49-8851-4e01097927d8","Type":"ContainerDied","Data":"b5b7af3ddffc549f41b44587e1f16b02bde9dd98ebb37093ba52bac3ee56e5e8"}
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.317409 4802 generic.go:334] "Generic (PLEG): container finished" podID="d7b15b12-47c1-4b49-8851-4e01097927d8" containerID="b5b7af3ddffc549f41b44587e1f16b02bde9dd98ebb37093ba52bac3ee56e5e8" exitCode=0
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.352813 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.353200 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.853182159 +0000 UTC m=+157.261182784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.356369 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-skrqg"
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.432259 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" podStartSLOduration=135.432234791 podStartE2EDuration="2m15.432234791s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:34.415123222 +0000 UTC m=+156.823123847" watchObservedRunningTime="2025-10-04 04:48:34.432234791 +0000 UTC m=+156.840235416"
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.443378 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kcw54"
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.447773 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 04 04:48:34 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld
Oct 04 04:48:34 crc kubenswrapper[4802]: [+]process-running ok
Oct 04 04:48:34 crc kubenswrapper[4802]: healthz check failed
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.447848 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.454664 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth"
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.458097 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:34.958077334 +0000 UTC m=+157.366078049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.556074 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.557470 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.057443974 +0000 UTC m=+157.465444599 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.657501 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.658035 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.158010979 +0000 UTC m=+157.566011604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.723248 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.723944 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.738915 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.740913 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.746956 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.758697 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.758951 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.258915812 +0000 UTC m=+157.666916437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.759071 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.759457 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.259441577 +0000 UTC m=+157.667442202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.860669 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.860794 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.360769132 +0000 UTC m=+157.768769767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.860860 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/181c60ac-6236-4b41-9d35-4c897b437ae3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"181c60ac-6236-4b41-9d35-4c897b437ae3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.860902 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/181c60ac-6236-4b41-9d35-4c897b437ae3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"181c60ac-6236-4b41-9d35-4c897b437ae3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.860961 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 04:48:34.861433 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:35.36140978 +0000 UTC m=+157.769410485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.911244 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.962156 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.962399 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/181c60ac-6236-4b41-9d35-4c897b437ae3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"181c60ac-6236-4b41-9d35-4c897b437ae3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.962430 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/181c60ac-6236-4b41-9d35-4c897b437ae3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"181c60ac-6236-4b41-9d35-4c897b437ae3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:34 crc kubenswrapper[4802]: E1004 
04:48:34.962840 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.462823008 +0000 UTC m=+157.870823633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:34 crc kubenswrapper[4802]: I1004 04:48:34.962874 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/181c60ac-6236-4b41-9d35-4c897b437ae3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"181c60ac-6236-4b41-9d35-4c897b437ae3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.018333 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/181c60ac-6236-4b41-9d35-4c897b437ae3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"181c60ac-6236-4b41-9d35-4c897b437ae3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.039603 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.060652 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.063391 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.064628 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.564613916 +0000 UTC m=+157.972614541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.077084 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.092891 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6gzrh" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.098128 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8582w" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.103261 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.128618 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.146123 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.165013 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.166369 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.666348903 +0000 UTC m=+158.074349528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.268013 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.270075 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.770062625 +0000 UTC m=+158.178063250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.370160 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.370591 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:35.870571058 +0000 UTC m=+158.278571683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.453842 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:35 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:35 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:35 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.453901 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.472711 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.473999 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:35.973980741 +0000 UTC m=+158.381981466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.576369 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.576824 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.076793618 +0000 UTC m=+158.484794243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.630428 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tv9vp" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.677763 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.678378 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.17835548 +0000 UTC m=+158.586356105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.724282 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.778532 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.779787 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.279763498 +0000 UTC m=+158.687764123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.781985 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.813549 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7w462" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.851408 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wkmdf"] Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.851704 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b15b12-47c1-4b49-8851-4e01097927d8" containerName="collect-profiles" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.851727 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b15b12-47c1-4b49-8851-4e01097927d8" containerName="collect-profiles" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.851866 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b15b12-47c1-4b49-8851-4e01097927d8" containerName="collect-profiles" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.854044 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.857180 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.867498 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkmdf"] Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.880187 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7b15b12-47c1-4b49-8851-4e01097927d8-secret-volume\") pod \"d7b15b12-47c1-4b49-8851-4e01097927d8\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.880280 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7b15b12-47c1-4b49-8851-4e01097927d8-config-volume\") pod \"d7b15b12-47c1-4b49-8851-4e01097927d8\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.880324 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pxnn\" (UniqueName: \"kubernetes.io/projected/d7b15b12-47c1-4b49-8851-4e01097927d8-kube-api-access-2pxnn\") pod \"d7b15b12-47c1-4b49-8851-4e01097927d8\" (UID: \"d7b15b12-47c1-4b49-8851-4e01097927d8\") " Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.880528 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:35 crc 
kubenswrapper[4802]: E1004 04:48:35.883620 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.383602684 +0000 UTC m=+158.791603309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.884722 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b15b12-47c1-4b49-8851-4e01097927d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7b15b12-47c1-4b49-8851-4e01097927d8" (UID: "d7b15b12-47c1-4b49-8851-4e01097927d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.887501 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.887600 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b15b12-47c1-4b49-8851-4e01097927d8-kube-api-access-2pxnn" (OuterVolumeSpecName: "kube-api-access-2pxnn") pod "d7b15b12-47c1-4b49-8851-4e01097927d8" (UID: "d7b15b12-47c1-4b49-8851-4e01097927d8"). InnerVolumeSpecName "kube-api-access-2pxnn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.889344 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b15b12-47c1-4b49-8851-4e01097927d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7b15b12-47c1-4b49-8851-4e01097927d8" (UID: "d7b15b12-47c1-4b49-8851-4e01097927d8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.982087 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a004ef3-3111-4249-8d37-9f293f5a092a-kube-api-access\") pod \"0a004ef3-3111-4249-8d37-9f293f5a092a\" (UID: \"0a004ef3-3111-4249-8d37-9f293f5a092a\") " Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.982323 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.982367 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a004ef3-3111-4249-8d37-9f293f5a092a-kubelet-dir\") pod \"0a004ef3-3111-4249-8d37-9f293f5a092a\" (UID: \"0a004ef3-3111-4249-8d37-9f293f5a092a\") " Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.982533 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-utilities\") pod \"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 
04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.982569 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-catalog-content\") pod \"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.982619 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s72b\" (UniqueName: \"kubernetes.io/projected/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-kube-api-access-5s72b\") pod \"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.982900 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a004ef3-3111-4249-8d37-9f293f5a092a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0a004ef3-3111-4249-8d37-9f293f5a092a" (UID: "0a004ef3-3111-4249-8d37-9f293f5a092a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:48:35 crc kubenswrapper[4802]: E1004 04:48:35.983576 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.48353224 +0000 UTC m=+158.891532865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.983834 4802 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7b15b12-47c1-4b49-8851-4e01097927d8-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.983862 4802 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7b15b12-47c1-4b49-8851-4e01097927d8-config-volume\") on node \"crc\" DevicePath \"\"" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.983880 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pxnn\" (UniqueName: \"kubernetes.io/projected/d7b15b12-47c1-4b49-8851-4e01097927d8-kube-api-access-2pxnn\") on node \"crc\" DevicePath \"\"" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.983893 4802 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a004ef3-3111-4249-8d37-9f293f5a092a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 04 04:48:35 crc kubenswrapper[4802]: I1004 04:48:35.986117 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a004ef3-3111-4249-8d37-9f293f5a092a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0a004ef3-3111-4249-8d37-9f293f5a092a" (UID: "0a004ef3-3111-4249-8d37-9f293f5a092a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.035669 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8frq9"] Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.035921 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a004ef3-3111-4249-8d37-9f293f5a092a" containerName="pruner" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.035937 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a004ef3-3111-4249-8d37-9f293f5a092a" containerName="pruner" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.036052 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a004ef3-3111-4249-8d37-9f293f5a092a" containerName="pruner" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.036875 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.039892 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.043763 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8frq9"] Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.085783 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-utilities\") pod \"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.086095 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-catalog-content\") pod \"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.086127 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.086157 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s72b\" (UniqueName: \"kubernetes.io/projected/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-kube-api-access-5s72b\") pod \"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.086203 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a004ef3-3111-4249-8d37-9f293f5a092a-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.086673 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-utilities\") pod \"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.086937 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-catalog-content\") pod 
\"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.087054 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.587033556 +0000 UTC m=+158.995034181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.106245 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s72b\" (UniqueName: \"kubernetes.io/projected/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-kube-api-access-5s72b\") pod \"community-operators-wkmdf\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") " pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.187420 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.187797 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-catalog-content\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.187858 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-utilities\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.187906 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmxp\" (UniqueName: \"kubernetes.io/projected/d30c5f5e-8390-4c6c-9dff-07157aa29319-kube-api-access-kwmxp\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.188501 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.688474265 +0000 UTC m=+159.096474900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.197179 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.237069 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bq9qt"] Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.238278 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.243895 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9qt"] Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.288732 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-catalog-content\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.288799 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-utilities\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc 
kubenswrapper[4802]: I1004 04:48:36.288837 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.288858 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmxp\" (UniqueName: \"kubernetes.io/projected/d30c5f5e-8390-4c6c-9dff-07157aa29319-kube-api-access-kwmxp\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.289713 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-catalog-content\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.289715 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-utilities\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.289825 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.789813229 +0000 UTC m=+159.197813844 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.321689 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmxp\" (UniqueName: \"kubernetes.io/projected/d30c5f5e-8390-4c6c-9dff-07157aa29319-kube-api-access-kwmxp\") pod \"certified-operators-8frq9\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") " pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.337607 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" event={"ID":"d7b15b12-47c1-4b49-8851-4e01097927d8","Type":"ContainerDied","Data":"3df71d39115fa2cf5a309ca5378ff3dac325ad5dd6523bb76242a199f523c218"} Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.337669 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3df71d39115fa2cf5a309ca5378ff3dac325ad5dd6523bb76242a199f523c218" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.337757 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.339980 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0a004ef3-3111-4249-8d37-9f293f5a092a","Type":"ContainerDied","Data":"8fff7cb591d10617139d0bab10f138591c4c6f6afd8b4fec2901560904469255"} Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.340001 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fff7cb591d10617139d0bab10f138591c4c6f6afd8b4fec2901560904469255" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.340001 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.341361 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"181c60ac-6236-4b41-9d35-4c897b437ae3","Type":"ContainerStarted","Data":"8be932c5646a9f21030f870396baf69b47d2fbbd2c2c54716b0cca6c5f79843c"} Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.372142 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.389913 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.390148 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.890111046 +0000 UTC m=+159.298111671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.390220 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-catalog-content\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.390330 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.390396 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-utilities\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.390689 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg7m2\" (UniqueName: \"kubernetes.io/projected/c067018f-e13d-4b01-a1f7-49528fcd6397-kube-api-access-xg7m2\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.391290 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.891243037 +0000 UTC m=+159.299243662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.432473 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkmdf"] Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.447398 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:36 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:36 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:36 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.447693 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.448470 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-76vwx"] Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.450000 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.461119 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-76vwx"] Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.492517 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.492727 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-catalog-content\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.492777 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-utilities\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.492833 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg7m2\" (UniqueName: \"kubernetes.io/projected/c067018f-e13d-4b01-a1f7-49528fcd6397-kube-api-access-xg7m2\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.493292 4802 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:36.993274093 +0000 UTC m=+159.401274718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.494595 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-catalog-content\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.494999 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-utilities\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.517582 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg7m2\" (UniqueName: \"kubernetes.io/projected/c067018f-e13d-4b01-a1f7-49528fcd6397-kube-api-access-xg7m2\") pod \"community-operators-bq9qt\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.568237 4802 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.595779 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.595847 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2r9\" (UniqueName: \"kubernetes.io/projected/175ac8cc-efac-4fd1-ba01-e224ab93757f-kube-api-access-wn2r9\") pod \"certified-operators-76vwx\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.595890 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-catalog-content\") pod \"certified-operators-76vwx\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.595932 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-utilities\") pod \"certified-operators-76vwx\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.596235 4802 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.096217063 +0000 UTC m=+159.504217678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.692172 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.698007 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.698248 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn2r9\" (UniqueName: \"kubernetes.io/projected/175ac8cc-efac-4fd1-ba01-e224ab93757f-kube-api-access-wn2r9\") pod \"certified-operators-76vwx\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.698278 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-catalog-content\") pod \"certified-operators-76vwx\" 
(UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.698309 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-utilities\") pod \"certified-operators-76vwx\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.698812 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-utilities\") pod \"certified-operators-76vwx\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.698891 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.198873506 +0000 UTC m=+159.606874131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.699855 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-catalog-content\") pod \"certified-operators-76vwx\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.714985 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.717155 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn2r9\" (UniqueName: \"kubernetes.io/projected/175ac8cc-efac-4fd1-ba01-e224ab93757f-kube-api-access-wn2r9\") pod \"certified-operators-76vwx\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.773104 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.774372 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9qt"] Oct 04 04:48:36 crc kubenswrapper[4802]: W1004 04:48:36.789732 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc067018f_e13d_4b01_a1f7_49528fcd6397.slice/crio-ad1d754f3ad6236c5d56b5b4b780336e09e14f1e5e359332f71220efe4aa840c WatchSource:0}: Error finding container ad1d754f3ad6236c5d56b5b4b780336e09e14f1e5e359332f71220efe4aa840c: Status 404 returned error can't find the container with id ad1d754f3ad6236c5d56b5b4b780336e09e14f1e5e359332f71220efe4aa840c Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.801474 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.802227 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.302202987 +0000 UTC m=+159.710203712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.823332 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8frq9"] Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.902614 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.903006 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.402972297 +0000 UTC m=+159.810972922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:36 crc kubenswrapper[4802]: I1004 04:48:36.903387 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:36 crc kubenswrapper[4802]: E1004 04:48:36.903813 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.40379776 +0000 UTC m=+159.811798385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.004614 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.004841 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.504802086 +0000 UTC m=+159.912802711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.005124 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.005518 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.505509386 +0000 UTC m=+159.913510011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.012672 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-76vwx"] Oct 04 04:48:37 crc kubenswrapper[4802]: W1004 04:48:37.029125 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod175ac8cc_efac_4fd1_ba01_e224ab93757f.slice/crio-c0cec0ba99603ce0c43df3ef5323b105360f2ae5895145851d14635642664efc WatchSource:0}: Error finding container c0cec0ba99603ce0c43df3ef5323b105360f2ae5895145851d14635642664efc: Status 404 returned error can't find the container with id c0cec0ba99603ce0c43df3ef5323b105360f2ae5895145851d14635642664efc Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.109021 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.109411 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.609387733 +0000 UTC m=+160.017388358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.210943 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.211277 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.711261914 +0000 UTC m=+160.119262539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.312473 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.312694 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.812659071 +0000 UTC m=+160.220659696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.312899 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.313262 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.813247957 +0000 UTC m=+160.221248582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.346135 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9qt" event={"ID":"c067018f-e13d-4b01-a1f7-49528fcd6397","Type":"ContainerStarted","Data":"ad1d754f3ad6236c5d56b5b4b780336e09e14f1e5e359332f71220efe4aa840c"} Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.347098 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76vwx" event={"ID":"175ac8cc-efac-4fd1-ba01-e224ab93757f","Type":"ContainerStarted","Data":"c0cec0ba99603ce0c43df3ef5323b105360f2ae5895145851d14635642664efc"} Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.347739 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkmdf" event={"ID":"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3","Type":"ContainerStarted","Data":"ccdd38f286918461a5badb4c9d7bdfb05cd950d550a0102927995ae283b16f58"} Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.348444 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8frq9" event={"ID":"d30c5f5e-8390-4c6c-9dff-07157aa29319","Type":"ContainerStarted","Data":"14be587530e59fe2d4373d8706a0ebb67c55af54c5056cc12e064f581cd32a15"} Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.414540 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.414805 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.914766558 +0000 UTC m=+160.322767193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.414875 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.415264 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:37.915247772 +0000 UTC m=+160.323248387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.447683 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:37 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:37 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:37 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.447757 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.516184 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.516378 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:38.016353371 +0000 UTC m=+160.424353996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.516999 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.517469 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.017448771 +0000 UTC m=+160.425449396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.618252 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.618484 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.118451178 +0000 UTC m=+160.526451803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.618562 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.618989 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.118980202 +0000 UTC m=+160.526980827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.719778 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.719884 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.219859935 +0000 UTC m=+160.627860560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.720043 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.720412 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.220403191 +0000 UTC m=+160.628403816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.821457 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.821731 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.321681255 +0000 UTC m=+160.729681920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.821866 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:37 crc kubenswrapper[4802]: E1004 04:48:37.822269 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.322255491 +0000 UTC m=+160.730256116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.837412 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwg79"] Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.838791 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.840588 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.895626 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwg79"] Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.923512 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.923768 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-utilities\") pod \"redhat-marketplace-qwg79\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:37 crc 
kubenswrapper[4802]: E1004 04:48:37.923869 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.423842353 +0000 UTC m=+160.831842998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.924104 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-catalog-content\") pod \"redhat-marketplace-qwg79\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:37 crc kubenswrapper[4802]: I1004 04:48:37.924143 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pnhh\" (UniqueName: \"kubernetes.io/projected/9f9d61a9-b985-45da-bacf-c9fb55b52b66-kube-api-access-4pnhh\") pod \"redhat-marketplace-qwg79\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.026338 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-catalog-content\") pod \"redhat-marketplace-qwg79\" (UID: 
\"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.026418 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pnhh\" (UniqueName: \"kubernetes.io/projected/9f9d61a9-b985-45da-bacf-c9fb55b52b66-kube-api-access-4pnhh\") pod \"redhat-marketplace-qwg79\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.026494 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-utilities\") pod \"redhat-marketplace-qwg79\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.026545 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.026903 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-catalog-content\") pod \"redhat-marketplace-qwg79\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.027102 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:38.527080642 +0000 UTC m=+160.935081307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.027144 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-utilities\") pod \"redhat-marketplace-qwg79\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.048224 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pnhh\" (UniqueName: \"kubernetes.io/projected/9f9d61a9-b985-45da-bacf-c9fb55b52b66-kube-api-access-4pnhh\") pod \"redhat-marketplace-qwg79\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") " pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.128093 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.128393 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.628353876 +0000 UTC m=+161.036354501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.128490 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.128866 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.628851 +0000 UTC m=+161.036851625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.165978 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.229973 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.230119 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.730089483 +0000 UTC m=+161.138090108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.230304 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.230627 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.730619718 +0000 UTC m=+161.138620343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.240013 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-678bk"] Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.241702 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.250628 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-678bk"] Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.333471 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.333813 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.833772704 +0000 UTC m=+161.241773339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.334205 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-catalog-content\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.334303 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.334385 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-utilities\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.334504 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngxwh\" (UniqueName: 
\"kubernetes.io/projected/a072d485-e16f-4778-8ca1-6b48bbc9f397-kube-api-access-ngxwh\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.334893 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.834878175 +0000 UTC m=+161.242878800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.357122 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"181c60ac-6236-4b41-9d35-4c897b437ae3","Type":"ContainerStarted","Data":"c68368f80d7eb138780a777dcc1ea59e5858ebe98df00c0718c0e67a5c742e2b"} Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.436275 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.436613 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngxwh\" (UniqueName: 
\"kubernetes.io/projected/a072d485-e16f-4778-8ca1-6b48bbc9f397-kube-api-access-ngxwh\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.436679 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-catalog-content\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.436729 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-utilities\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.437202 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-utilities\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.437308 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:38.937283321 +0000 UTC m=+161.345283946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.437907 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-catalog-content\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.447099 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwg79"] Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.458370 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:38 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:38 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:38 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.458437 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.464258 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngxwh\" 
(UniqueName: \"kubernetes.io/projected/a072d485-e16f-4778-8ca1-6b48bbc9f397-kube-api-access-ngxwh\") pod \"redhat-marketplace-678bk\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.537561 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.537935 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.037921367 +0000 UTC m=+161.445921992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.560550 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.638966 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.639916 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.13988222 +0000 UTC m=+161.547882845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.740990 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.741429 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.241414981 +0000 UTC m=+161.649415606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.766590 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-678bk"] Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.842511 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.842911 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.34289059 +0000 UTC m=+161.750891215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:38 crc kubenswrapper[4802]: I1004 04:48:38.943708 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:38 crc kubenswrapper[4802]: E1004 04:48:38.944293 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.444263797 +0000 UTC m=+161.852264452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.045044 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.045335 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.545296694 +0000 UTC m=+161.953297329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.045440 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.045874 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.54586349 +0000 UTC m=+161.953864125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.078894 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.078969 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.089020 4802 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7qhp4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]log ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]etcd ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/generic-apiserver-start-informers ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/max-in-flight-filter ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 04 04:48:39 crc kubenswrapper[4802]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 04 04:48:39 crc kubenswrapper[4802]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Oct 04 04:48:39 crc 
kubenswrapper[4802]: [+]poststarthook/project.openshift.io-projectcache ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/openshift.io-startinformers ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 04 04:48:39 crc kubenswrapper[4802]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 04 04:48:39 crc kubenswrapper[4802]: livez check failed Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.089078 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" podUID="0b9f7844-b732-44d3-96a3-3cc28364fac8" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.133018 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.146320 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.147006 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.64698545 +0000 UTC m=+162.054986075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.149920 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z72dq" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.239345 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-klg7m"] Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.240385 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.243018 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.247840 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.249267 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:39.749243621 +0000 UTC m=+162.157244306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.255466 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klg7m"] Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.348574 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.348698 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.848667883 +0000 UTC m=+162.256668518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.349026 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-utilities\") pod \"redhat-operators-klg7m\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.349090 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8w4b\" (UniqueName: \"kubernetes.io/projected/88d556c4-4775-463e-bfc7-c766fa10fce2-kube-api-access-c8w4b\") pod \"redhat-operators-klg7m\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.349117 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-catalog-content\") pod \"redhat-operators-klg7m\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.349164 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.349482 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.849470796 +0000 UTC m=+162.257471421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.368005 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8frq9" event={"ID":"d30c5f5e-8390-4c6c-9dff-07157aa29319","Type":"ContainerStarted","Data":"bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f"} Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.370447 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9qt" event={"ID":"c067018f-e13d-4b01-a1f7-49528fcd6397","Type":"ContainerStarted","Data":"febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01"} Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.371358 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-678bk" event={"ID":"a072d485-e16f-4778-8ca1-6b48bbc9f397","Type":"ContainerStarted","Data":"aadaecee8b77482a2240b7ee8d83c25f7866680ae68459a2a84f037a14976018"} Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.372192 
4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkmdf" event={"ID":"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3","Type":"ContainerStarted","Data":"0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d"} Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.372889 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwg79" event={"ID":"9f9d61a9-b985-45da-bacf-c9fb55b52b66","Type":"ContainerStarted","Data":"c330bbea77d72b1fb7db9daa155a0e2559442f60e8d4f04ed7dfc1a027bad612"} Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.387420 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd30c5f5e_8390_4c6c_9dff_07157aa29319.slice/crio-bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc067018f_e13d_4b01_a1f7_49528fcd6397.slice/crio-febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01.scope\": RecentStats: unable to find data in memory cache]" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.447931 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:39 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:39 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:39 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.448433 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.450693 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.451021 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:39.950999247 +0000 UTC m=+162.358999872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.451090 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-utilities\") pod \"redhat-operators-klg7m\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.451155 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8w4b\" (UniqueName: \"kubernetes.io/projected/88d556c4-4775-463e-bfc7-c766fa10fce2-kube-api-access-c8w4b\") pod \"redhat-operators-klg7m\" (UID: 
\"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.451382 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-catalog-content\") pod \"redhat-operators-klg7m\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.451841 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-utilities\") pod \"redhat-operators-klg7m\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.452060 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-catalog-content\") pod \"redhat-operators-klg7m\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.469610 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8w4b\" (UniqueName: \"kubernetes.io/projected/88d556c4-4775-463e-bfc7-c766fa10fce2-kube-api-access-c8w4b\") pod \"redhat-operators-klg7m\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") " pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.553370 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: 
\"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.553935 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.053908546 +0000 UTC m=+162.461909171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.561756 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.646002 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m6c8r"] Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.647899 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.654961 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.655202 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.15516091 +0000 UTC m=+162.563161545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.655321 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.655894 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.15587452 +0000 UTC m=+162.563875165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.667790 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m6c8r"] Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.757204 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.757385 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.257347079 +0000 UTC m=+162.665347704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.757798 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdk4f\" (UniqueName: \"kubernetes.io/projected/6dcaa0a7-ecad-45f4-9dec-510b38c25042-kube-api-access-wdk4f\") pod \"redhat-operators-m6c8r\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.757833 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-utilities\") pod \"redhat-operators-m6c8r\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.757912 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.757954 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-catalog-content\") 
pod \"redhat-operators-m6c8r\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.758325 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.258303806 +0000 UTC m=+162.666304441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.766887 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klg7m"] Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.858845 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.859156 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.359117576 +0000 UTC m=+162.767118201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.859239 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.859293 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-catalog-content\") pod \"redhat-operators-m6c8r\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.859349 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-utilities\") pod \"redhat-operators-m6c8r\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.859376 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdk4f\" (UniqueName: \"kubernetes.io/projected/6dcaa0a7-ecad-45f4-9dec-510b38c25042-kube-api-access-wdk4f\") pod \"redhat-operators-m6c8r\" (UID: 
\"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.859727 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.359715993 +0000 UTC m=+162.767716618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.859944 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-catalog-content\") pod \"redhat-operators-m6c8r\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.860245 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-utilities\") pod \"redhat-operators-m6c8r\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.880925 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdk4f\" (UniqueName: \"kubernetes.io/projected/6dcaa0a7-ecad-45f4-9dec-510b38c25042-kube-api-access-wdk4f\") pod \"redhat-operators-m6c8r\" (UID: 
\"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.961058 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.961340 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.461294855 +0000 UTC m=+162.869295530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.961773 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:39 crc kubenswrapper[4802]: E1004 04:48:39.962129 4802 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.462109868 +0000 UTC m=+162.870110493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:39 crc kubenswrapper[4802]: I1004 04:48:39.984183 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.063621 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.063788 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.563761522 +0000 UTC m=+162.971762157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.064391 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.064842 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.564830232 +0000 UTC m=+162.972830857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.165819 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.166206 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.666167868 +0000 UTC m=+163.074168493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.166557 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.166940 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.666926469 +0000 UTC m=+163.074927094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.178728 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-28hfp" Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.251067 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m6c8r"] Oct 04 04:48:40 crc kubenswrapper[4802]: W1004 04:48:40.256913 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dcaa0a7_ecad_45f4_9dec_510b38c25042.slice/crio-0a66442fa17acff7b51d8455b19b370f8bfcdc81382cf75be7d65e02a85d4480 WatchSource:0}: Error finding container 0a66442fa17acff7b51d8455b19b370f8bfcdc81382cf75be7d65e02a85d4480: Status 404 returned error can't find the container with id 0a66442fa17acff7b51d8455b19b370f8bfcdc81382cf75be7d65e02a85d4480 Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.270867 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.271029 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.770999051 +0000 UTC m=+163.178999676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.272086 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.272585 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.772562185 +0000 UTC m=+163.180562880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.375731 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.376399 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.87637723 +0000 UTC m=+163.284377855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.385768 4802 generic.go:334] "Generic (PLEG): container finished" podID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerID="febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01" exitCode=0 Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.385881 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9qt" event={"ID":"c067018f-e13d-4b01-a1f7-49528fcd6397","Type":"ContainerDied","Data":"febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01"} Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.389928 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76vwx" event={"ID":"175ac8cc-efac-4fd1-ba01-e224ab93757f","Type":"ContainerStarted","Data":"60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1"} Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.390959 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klg7m" event={"ID":"88d556c4-4775-463e-bfc7-c766fa10fce2","Type":"ContainerStarted","Data":"82662a051e4f8677812dd3735e02dfe546d2a6495558988fc64eac5036c96e32"} Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.392551 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6c8r" event={"ID":"6dcaa0a7-ecad-45f4-9dec-510b38c25042","Type":"ContainerStarted","Data":"0a66442fa17acff7b51d8455b19b370f8bfcdc81382cf75be7d65e02a85d4480"} Oct 04 
04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.446704 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:40 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:40 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:40 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.446773 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.477513 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.478070 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:40.978047735 +0000 UTC m=+163.386048360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.581278 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.581948 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.081906551 +0000 UTC m=+163.489907186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.683608 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.684046 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.184029879 +0000 UTC m=+163.592030504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.785052 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.785659 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.285622462 +0000 UTC m=+163.693623077 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.886975 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.887406 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.387388729 +0000 UTC m=+163.795389354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.988670 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.988924 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.488879459 +0000 UTC m=+163.896880104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:40 crc kubenswrapper[4802]: I1004 04:48:40.989034 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:40 crc kubenswrapper[4802]: E1004 04:48:40.989350 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.489336342 +0000 UTC m=+163.897336967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.091136 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.091519 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.59147941 +0000 UTC m=+163.999480035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.091621 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.091697 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.092128 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.592108108 +0000 UTC m=+164.000108733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.097985 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d189ff1-3446-47fe-bcea-6b09e72a4567-metrics-certs\") pod \"network-metrics-daemon-n27xq\" (UID: \"0d189ff1-3446-47fe-bcea-6b09e72a4567\") " pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.186225 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n27xq" Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.192951 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.193143 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.693102194 +0000 UTC m=+164.101102839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.193435 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.193829 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.693819074 +0000 UTC m=+164.101819699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.294255 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.294863 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.794840181 +0000 UTC m=+164.202840806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.370328 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=7.370303312 podStartE2EDuration="7.370303312s" podCreationTimestamp="2025-10-04 04:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:40.410416903 +0000 UTC m=+162.818417548" watchObservedRunningTime="2025-10-04 04:48:41.370303312 +0000 UTC m=+163.778303947" Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.374810 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n27xq"] Oct 04 04:48:41 crc kubenswrapper[4802]: W1004 04:48:41.387609 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d189ff1_3446_47fe_bcea_6b09e72a4567.slice/crio-074bb793302de64464f4544e5a8274ebcf2fefa5584862a2bb83d565451077b5 WatchSource:0}: Error finding container 074bb793302de64464f4544e5a8274ebcf2fefa5584862a2bb83d565451077b5: Status 404 returned error can't find the container with id 074bb793302de64464f4544e5a8274ebcf2fefa5584862a2bb83d565451077b5 Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.395853 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.396369 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.896353131 +0000 UTC m=+164.304353756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.401390 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n27xq" event={"ID":"0d189ff1-3446-47fe-bcea-6b09e72a4567","Type":"ContainerStarted","Data":"074bb793302de64464f4544e5a8274ebcf2fefa5584862a2bb83d565451077b5"} Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.403416 4802 generic.go:334] "Generic (PLEG): container finished" podID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerID="0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d" exitCode=0 Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.403479 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkmdf" event={"ID":"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3","Type":"ContainerDied","Data":"0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d"} Oct 04 04:48:41 crc 
kubenswrapper[4802]: I1004 04:48:41.405265 4802 generic.go:334] "Generic (PLEG): container finished" podID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerID="bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f" exitCode=0 Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.405295 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8frq9" event={"ID":"d30c5f5e-8390-4c6c-9dff-07157aa29319","Type":"ContainerDied","Data":"bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f"} Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.447517 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:41 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:41 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:41 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.447595 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.497609 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.497830 4802 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.997764749 +0000 UTC m=+164.405765374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.498327 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.498777 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:41.998764867 +0000 UTC m=+164.406765502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.599595 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.599773 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.099747573 +0000 UTC m=+164.507748208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.600064 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.600384 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.10037363 +0000 UTC m=+164.508374255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.700922 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.701152 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.201118389 +0000 UTC m=+164.609119014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.701187 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.701577 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.201569312 +0000 UTC m=+164.609569937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.801938 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.802180 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.302129716 +0000 UTC m=+164.710130381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.802362 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.804265 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.304230895 +0000 UTC m=+164.712231560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.903856 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.904020 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.403994376 +0000 UTC m=+164.811994991 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:41 crc kubenswrapper[4802]: I1004 04:48:41.904178 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:41 crc kubenswrapper[4802]: E1004 04:48:41.904601 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.404588843 +0000 UTC m=+164.812589478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.006037 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.006736 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.50669142 +0000 UTC m=+164.914692055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.108036 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.108797 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.608771047 +0000 UTC m=+165.016771692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.209605 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.210082 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.71002191 +0000 UTC m=+165.118022585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.311822 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.312348 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.812325332 +0000 UTC m=+165.220325967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.413377 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.413896 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:42.913837183 +0000 UTC m=+165.321837848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.421774 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwg79" event={"ID":"9f9d61a9-b985-45da-bacf-c9fb55b52b66","Type":"ContainerStarted","Data":"120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45"} Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.423953 4802 generic.go:334] "Generic (PLEG): container finished" podID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerID="60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1" exitCode=0 Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.424010 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76vwx" event={"ID":"175ac8cc-efac-4fd1-ba01-e224ab93757f","Type":"ContainerDied","Data":"60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1"} Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.448405 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:42 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:42 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:42 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.448482 4802 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.516022 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.516542 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.016516006 +0000 UTC m=+165.424516691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.617229 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.617685 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.117658666 +0000 UTC m=+165.525659301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.719403 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.719857 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.219840846 +0000 UTC m=+165.627841471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.821011 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.821473 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.321450399 +0000 UTC m=+165.729451034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:42 crc kubenswrapper[4802]: I1004 04:48:42.923299 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:42 crc kubenswrapper[4802]: E1004 04:48:42.923798 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.423779012 +0000 UTC m=+165.831779637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.024399 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.024635 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.524600543 +0000 UTC m=+165.932601208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.025313 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.027394 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.52732444 +0000 UTC m=+165.935325115 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.127199 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.127522 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.627484052 +0000 UTC m=+166.035484677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.128042 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.128567 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.628542522 +0000 UTC m=+166.036543187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.229387 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.229637 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.72960232 +0000 UTC m=+166.137602945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.230196 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.230702 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.7306796 +0000 UTC m=+166.138680235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.331797 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.332048 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.831997835 +0000 UTC m=+166.239998500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.332129 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.332880 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.832852909 +0000 UTC m=+166.240853574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.432326 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-678bk" event={"ID":"a072d485-e16f-4778-8ca1-6b48bbc9f397","Type":"ContainerStarted","Data":"6bc6ef9569683432af1f865849de422a55e442459742a815f7fd454d5024ed41"} Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.432777 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.433065 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.933021271 +0000 UTC m=+166.341021916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.433202 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.433574 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:43.933558686 +0000 UTC m=+166.341559311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.434048 4802 generic.go:334] "Generic (PLEG): container finished" podID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerID="120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45" exitCode=0 Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.434085 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwg79" event={"ID":"9f9d61a9-b985-45da-bacf-c9fb55b52b66","Type":"ContainerDied","Data":"120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45"} Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.435790 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.445404 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:43 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:43 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:43 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.445503 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.534320 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.534589 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.034551892 +0000 UTC m=+166.442552517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.534888 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.536039 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:44.036015633 +0000 UTC m=+166.444016288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.642370 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.643444 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.143415718 +0000 UTC m=+166.551416343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.697301 4802 patch_prober.go:28] interesting pod/console-f9d7485db-6fwp2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.697375 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6fwp2" podUID="64860eca-743c-423a-8ee4-a1e5fd4f667d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.745024 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.745713 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.24569802 +0000 UTC m=+166.653698645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.846044 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.846720 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.346664776 +0000 UTC m=+166.754665411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.890623 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.890716 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.890792 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.890859 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:48:43 crc kubenswrapper[4802]: I1004 04:48:43.947227 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:43 crc kubenswrapper[4802]: E1004 04:48:43.948216 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.448188276 +0000 UTC m=+166.856188901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.048541 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.048822 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.548784442 +0000 UTC m=+166.956785067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.048913 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.049382 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.549352387 +0000 UTC m=+166.957353012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.087987 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.093347 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7qhp4" Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.152365 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.153237 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.653193173 +0000 UTC m=+167.061193798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.153789 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.161592 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.661551297 +0000 UTC m=+167.069552102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.255262 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.255775 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.755738223 +0000 UTC m=+167.163738868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.357011 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.357436 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.857413338 +0000 UTC m=+167.265413963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.441537 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" event={"ID":"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e","Type":"ContainerStarted","Data":"a0a5328ef1f326b0f72aacf247cbd1ac185c70e4458339ceb85aa6853fc84ac8"} Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.443482 4802 generic.go:334] "Generic (PLEG): container finished" podID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerID="5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55" exitCode=0 Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.443597 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klg7m" event={"ID":"88d556c4-4775-463e-bfc7-c766fa10fce2","Type":"ContainerDied","Data":"5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55"} Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.445146 4802 generic.go:334] "Generic (PLEG): container finished" podID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerID="0fcc79bc419bb3b460319ad6d239305625dc2184567db0d3dac329b8cadab605" exitCode=0 Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.445207 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6c8r" event={"ID":"6dcaa0a7-ecad-45f4-9dec-510b38c25042","Type":"ContainerDied","Data":"0fcc79bc419bb3b460319ad6d239305625dc2184567db0d3dac329b8cadab605"} Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.446727 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n27xq" event={"ID":"0d189ff1-3446-47fe-bcea-6b09e72a4567","Type":"ContainerStarted","Data":"78fa79b7b67136e7d7a343bc1f355cb051a5eb8cafc51336c241f47637b56f1a"} Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.447356 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:44 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:44 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:44 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.447403 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.448582 4802 generic.go:334] "Generic (PLEG): container finished" podID="181c60ac-6236-4b41-9d35-4c897b437ae3" containerID="c68368f80d7eb138780a777dcc1ea59e5858ebe98df00c0718c0e67a5c742e2b" exitCode=0 Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.448689 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"181c60ac-6236-4b41-9d35-4c897b437ae3","Type":"ContainerDied","Data":"c68368f80d7eb138780a777dcc1ea59e5858ebe98df00c0718c0e67a5c742e2b"} Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.450122 4802 generic.go:334] "Generic (PLEG): container finished" podID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerID="6bc6ef9569683432af1f865849de422a55e442459742a815f7fd454d5024ed41" exitCode=0 Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.450148 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-678bk" event={"ID":"a072d485-e16f-4778-8ca1-6b48bbc9f397","Type":"ContainerDied","Data":"6bc6ef9569683432af1f865849de422a55e442459742a815f7fd454d5024ed41"} Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.458902 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.460407 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:44.960374979 +0000 UTC m=+167.368375704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.560236 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.560701 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.060681946 +0000 UTC m=+167.468682571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.661503 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.661876 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.161828296 +0000 UTC m=+167.569828941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.762880 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.763287 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.263272255 +0000 UTC m=+167.671272880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.864110 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.864332 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.364285891 +0000 UTC m=+167.772286516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.864657 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.865031 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.365020782 +0000 UTC m=+167.773021417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.965775 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.966071 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.466021708 +0000 UTC m=+167.874022343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:44 crc kubenswrapper[4802]: I1004 04:48:44.966152 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:44 crc kubenswrapper[4802]: E1004 04:48:44.966750 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.466732798 +0000 UTC m=+167.874733443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.067167 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:45 crc kubenswrapper[4802]: E1004 04:48:45.067334 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.567308152 +0000 UTC m=+167.975308787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.067458 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:45 crc kubenswrapper[4802]: E1004 04:48:45.067917 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.567901429 +0000 UTC m=+167.975902084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.168633 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:45 crc kubenswrapper[4802]: E1004 04:48:45.168856 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.668815613 +0000 UTC m=+168.076816268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.169097 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:45 crc kubenswrapper[4802]: E1004 04:48:45.169507 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.669494402 +0000 UTC m=+168.077495037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.269961 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:45 crc kubenswrapper[4802]: E1004 04:48:45.270238 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.77018675 +0000 UTC m=+168.178187385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.306223 4802 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.371658 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:45 crc kubenswrapper[4802]: E1004 04:48:45.372079 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.87205789 +0000 UTC m=+168.280058525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.447477 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:45 crc kubenswrapper[4802]: [-]has-synced failed: reason withheld Oct 04 04:48:45 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:45 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.447551 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.473753 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:45 crc kubenswrapper[4802]: E1004 04:48:45.473921 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-04 04:48:45.97388787 +0000 UTC m=+168.381888505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.474277 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:45 crc kubenswrapper[4802]: E1004 04:48:45.474711 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-04 04:48:45.974686272 +0000 UTC m=+168.382686907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-69tth" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.565509 4802 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-04T04:48:45.30629557Z","Handler":null,"Name":""} Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.575126 4802 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.575174 4802 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.575338 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.580910 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.676969 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.699398 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.862796 4802 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.863183 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.879840 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/181c60ac-6236-4b41-9d35-4c897b437ae3-kube-api-access\") pod \"181c60ac-6236-4b41-9d35-4c897b437ae3\" (UID: \"181c60ac-6236-4b41-9d35-4c897b437ae3\") " Oct 04 04:48:45 crc 
kubenswrapper[4802]: I1004 04:48:45.880005 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/181c60ac-6236-4b41-9d35-4c897b437ae3-kubelet-dir\") pod \"181c60ac-6236-4b41-9d35-4c897b437ae3\" (UID: \"181c60ac-6236-4b41-9d35-4c897b437ae3\") " Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.880342 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/181c60ac-6236-4b41-9d35-4c897b437ae3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "181c60ac-6236-4b41-9d35-4c897b437ae3" (UID: "181c60ac-6236-4b41-9d35-4c897b437ae3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.886161 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181c60ac-6236-4b41-9d35-4c897b437ae3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "181c60ac-6236-4b41-9d35-4c897b437ae3" (UID: "181c60ac-6236-4b41-9d35-4c897b437ae3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.967193 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-69tth\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.981728 4802 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/181c60ac-6236-4b41-9d35-4c897b437ae3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 04 04:48:45 crc kubenswrapper[4802]: I1004 04:48:45.981777 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/181c60ac-6236-4b41-9d35-4c897b437ae3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.146029 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.369600 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.426568 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-69tth"] Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.448345 4802 patch_prober.go:28] interesting pod/router-default-5444994796-kcw54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 04 04:48:46 crc kubenswrapper[4802]: [+]has-synced ok Oct 04 04:48:46 crc kubenswrapper[4802]: [+]process-running ok Oct 04 04:48:46 crc kubenswrapper[4802]: healthz check failed Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.448416 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kcw54" podUID="0452346e-4ae6-4944-8203-fbf3c3273223" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.465840 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"181c60ac-6236-4b41-9d35-4c897b437ae3","Type":"ContainerDied","Data":"8be932c5646a9f21030f870396baf69b47d2fbbd2c2c54716b0cca6c5f79843c"} Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.465889 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be932c5646a9f21030f870396baf69b47d2fbbd2c2c54716b0cca6c5f79843c" Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.465972 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 04 04:48:46 crc kubenswrapper[4802]: I1004 04:48:46.467754 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" event={"ID":"049a575e-6351-4aa3-89b0-395dd5dc7af5","Type":"ContainerStarted","Data":"3898148a7d0a0d30cadb41dc3607b30e4b5d43993ed70815f41950caf93badb0"} Oct 04 04:48:47 crc kubenswrapper[4802]: I1004 04:48:47.447772 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:47 crc kubenswrapper[4802]: I1004 04:48:47.454229 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kcw54" Oct 04 04:48:47 crc kubenswrapper[4802]: I1004 04:48:47.480753 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" event={"ID":"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e","Type":"ContainerStarted","Data":"3eb025ad6e281a409eeeb7db6f764cf16a4388b213038a3f577c40ebb9715669"} Oct 04 04:48:47 crc kubenswrapper[4802]: I1004 04:48:47.483064 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" event={"ID":"049a575e-6351-4aa3-89b0-395dd5dc7af5","Type":"ContainerStarted","Data":"0c7a541827b8d0f3573cde1fe33445ca287969f1cade7acf6ab2d0a168052b1c"} Oct 04 04:48:47 crc kubenswrapper[4802]: I1004 04:48:47.485999 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n27xq" event={"ID":"0d189ff1-3446-47fe-bcea-6b09e72a4567","Type":"ContainerStarted","Data":"5a9e0a64955be27bd6de70fb675c451d2b63832d270dfe0359387a5df0b7279e"} Oct 04 04:48:48 crc kubenswrapper[4802]: I1004 04:48:48.495693 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" 
event={"ID":"dcc8d0cb-faa8-4d06-9ec5-742a2f97a35e","Type":"ContainerStarted","Data":"9e6e4b7d10c0105629babcefff48302e185960c00164b28b5bfcb9622e3f6964"} Oct 04 04:48:48 crc kubenswrapper[4802]: I1004 04:48:48.497843 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:48:48 crc kubenswrapper[4802]: I1004 04:48:48.521057 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" podStartSLOduration=149.521037375 podStartE2EDuration="2m29.521037375s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:48.516789376 +0000 UTC m=+170.924790021" watchObservedRunningTime="2025-10-04 04:48:48.521037375 +0000 UTC m=+170.929037990" Oct 04 04:48:48 crc kubenswrapper[4802]: I1004 04:48:48.547781 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n27xq" podStartSLOduration=149.547756453 podStartE2EDuration="2m29.547756453s" podCreationTimestamp="2025-10-04 04:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:48.544125921 +0000 UTC m=+170.952126546" watchObservedRunningTime="2025-10-04 04:48:48.547756453 +0000 UTC m=+170.955757078" Oct 04 04:48:52 crc kubenswrapper[4802]: I1004 04:48:52.662941 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 04:48:52 crc kubenswrapper[4802]: I1004 04:48:52.663328 4802 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.701930 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.705728 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.730018 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hqqtl" podStartSLOduration=31.729997823 podStartE2EDuration="31.729997823s" podCreationTimestamp="2025-10-04 04:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:48:49.525475622 +0000 UTC m=+171.933476267" watchObservedRunningTime="2025-10-04 04:48:53.729997823 +0000 UTC m=+176.137998448" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.890493 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.890500 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.890807 4802 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.890922 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.890960 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-q9tg7" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.891625 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.891703 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.891751 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0083c61f75fa9e638ece752de2e16d575836ff6dc0a084b43aa4572b962c2396"} pod="openshift-console/downloads-7954f5f757-q9tg7" containerMessage="Container download-server failed liveness probe, will be restarted" Oct 04 04:48:53 crc kubenswrapper[4802]: I1004 04:48:53.892252 4802 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" containerID="cri-o://0083c61f75fa9e638ece752de2e16d575836ff6dc0a084b43aa4572b962c2396" gracePeriod=2 Oct 04 04:48:55 crc kubenswrapper[4802]: I1004 04:48:55.541559 4802 generic.go:334] "Generic (PLEG): container finished" podID="eee3cf4f-0b25-4641-865e-8f8101256453" containerID="0083c61f75fa9e638ece752de2e16d575836ff6dc0a084b43aa4572b962c2396" exitCode=0 Oct 04 04:48:55 crc kubenswrapper[4802]: I1004 04:48:55.541681 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q9tg7" event={"ID":"eee3cf4f-0b25-4641-865e-8f8101256453","Type":"ContainerDied","Data":"0083c61f75fa9e638ece752de2e16d575836ff6dc0a084b43aa4572b962c2396"} Oct 04 04:49:03 crc kubenswrapper[4802]: I1004 04:49:03.889935 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:49:03 crc kubenswrapper[4802]: I1004 04:49:03.890400 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:49:04 crc kubenswrapper[4802]: I1004 04:49:04.137467 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-68zld" Oct 04 04:49:06 crc kubenswrapper[4802]: I1004 04:49:06.154312 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 
04:49:07 crc kubenswrapper[4802]: I1004 04:49:07.135288 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 04 04:49:13 crc kubenswrapper[4802]: I1004 04:49:13.890072 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:49:13 crc kubenswrapper[4802]: I1004 04:49:13.890500 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:49:22 crc kubenswrapper[4802]: I1004 04:49:22.662530 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 04:49:22 crc kubenswrapper[4802]: I1004 04:49:22.663515 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:49:22 crc kubenswrapper[4802]: I1004 04:49:22.663598 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:49:22 crc kubenswrapper[4802]: I1004 04:49:22.664592 4802 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 04:49:22 crc kubenswrapper[4802]: I1004 04:49:22.664728 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123" gracePeriod=600 Oct 04 04:49:23 crc kubenswrapper[4802]: I1004 04:49:23.892025 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:49:23 crc kubenswrapper[4802]: I1004 04:49:23.893188 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:49:28 crc kubenswrapper[4802]: I1004 04:49:28.221474 4802 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.472275808s: [/var/lib/containers/storage/overlay/fe2ae93d53d0659b35cd8455058c22d623e6103dc7b72bf4b358f430f8fdb0ad/diff /var/log/pods/openshift-authentication_oauth-openshift-558db77b4-ch4cq_ac6100d3-2668-4b1e-a78a-6f0703eca64a/oauth-openshift/0.log]; will not log again for this container unless duration exceeds 2s Oct 04 04:49:28 crc kubenswrapper[4802]: I1004 04:49:28.221474 4802 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 
1.769140805s: [/var/lib/containers/storage/overlay/b8aa144771b83103de289531a99c56b32d4104888ff0cbe1c1283889831c6f5a/diff /var/log/pods/openshift-console-operator_console-operator-58897d9998-skrqg_a011efc4-8846-45fd-8f1a-27d5907889bf/console-operator/0.log]; will not log again for this container unless duration exceeds 2s Oct 04 04:49:30 crc kubenswrapper[4802]: I1004 04:49:30.760770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123"} Oct 04 04:49:30 crc kubenswrapper[4802]: I1004 04:49:30.760742 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123" exitCode=0 Oct 04 04:49:33 crc kubenswrapper[4802]: I1004 04:49:33.890051 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:49:33 crc kubenswrapper[4802]: I1004 04:49:33.890134 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:49:43 crc kubenswrapper[4802]: I1004 04:49:43.889792 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:49:43 crc kubenswrapper[4802]: I1004 
04:49:43.890388 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:49:49 crc kubenswrapper[4802]: E1004 04:49:49.334248 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 04 04:49:49 crc kubenswrapper[4802]: E1004 04:49:49.334809 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwmxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMo
unt:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8frq9_openshift-marketplace(d30c5f5e-8390-4c6c-9dff-07157aa29319): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 04 04:49:49 crc kubenswrapper[4802]: E1004 04:49:49.336284 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8frq9" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" Oct 04 04:49:50 crc kubenswrapper[4802]: E1004 04:49:50.063865 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage4271042247/3\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 04 04:49:50 crc kubenswrapper[4802]: E1004 04:49:50.064047 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8w4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-klg7m_openshift-marketplace(88d556c4-4775-463e-bfc7-c766fa10fce2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage4271042247/3\": happened during read: context canceled" logger="UnhandledError" Oct 04 04:49:50 crc kubenswrapper[4802]: E1004 04:49:50.065270 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\\\"/var/tmp/container_images_storage4271042247/3\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-klg7m" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" Oct 04 04:49:53 crc kubenswrapper[4802]: I1004 04:49:53.890728 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:49:53 crc kubenswrapper[4802]: I1004 04:49:53.891124 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:03 crc kubenswrapper[4802]: I1004 04:50:03.891151 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:03 crc kubenswrapper[4802]: I1004 04:50:03.891891 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:13 crc kubenswrapper[4802]: I1004 04:50:13.890841 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:13 crc kubenswrapper[4802]: I1004 04:50:13.890924 4802 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:14 crc kubenswrapper[4802]: E1004 04:50:14.140859 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 04 04:50:14 crc kubenswrapper[4802]: E1004 04:50:14.141265 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s72b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOption
s:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wkmdf_openshift-marketplace(01bacf77-5ec5-42c6-af1a-d27ffc2f26e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 04 04:50:14 crc kubenswrapper[4802]: E1004 04:50:14.142759 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wkmdf" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" Oct 04 04:50:17 crc kubenswrapper[4802]: E1004 04:50:17.793697 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1835456826/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 04 04:50:17 crc kubenswrapper[4802]: E1004 04:50:17.793899 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngxwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-678bk_openshift-marketplace(a072d485-e16f-4778-8ca1-6b48bbc9f397): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1835456826/2\": happened during read: context canceled" logger="UnhandledError" Oct 04 04:50:17 crc kubenswrapper[4802]: E1004 04:50:17.795149 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\\\"/var/tmp/container_images_storage1835456826/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-marketplace-678bk" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" Oct 04 04:50:23 crc kubenswrapper[4802]: I1004 04:50:23.890081 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:23 crc kubenswrapper[4802]: I1004 04:50:23.890482 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:30 crc kubenswrapper[4802]: E1004 04:50:30.161204 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 04 04:50:30 crc kubenswrapper[4802]: E1004 04:50:30.162234 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdk4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-m6c8r_openshift-marketplace(6dcaa0a7-ecad-45f4-9dec-510b38c25042): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 04 04:50:30 crc kubenswrapper[4802]: E1004 04:50:30.163520 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-m6c8r" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" Oct 04 04:50:30 crc 
kubenswrapper[4802]: E1004 04:50:30.180322 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 04 04:50:30 crc kubenswrapper[4802]: E1004 04:50:30.180759 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn2r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-76vwx_openshift-marketplace(175ac8cc-efac-4fd1-ba01-e224ab93757f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 04 04:50:30 crc kubenswrapper[4802]: E1004 04:50:30.182375 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-76vwx" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" Oct 04 04:50:30 crc kubenswrapper[4802]: E1004 04:50:30.546813 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 04 04:50:30 crc kubenswrapper[4802]: E1004 04:50:30.547031 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xg7m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bq9qt_openshift-marketplace(c067018f-e13d-4b01-a1f7-49528fcd6397): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 04 04:50:30 crc kubenswrapper[4802]: E1004 04:50:30.548582 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bq9qt" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" Oct 04 04:50:33 crc 
kubenswrapper[4802]: I1004 04:50:33.890321 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:33 crc kubenswrapper[4802]: I1004 04:50:33.890872 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:34 crc kubenswrapper[4802]: E1004 04:50:34.935025 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-m6c8r" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" Oct 04 04:50:34 crc kubenswrapper[4802]: E1004 04:50:34.935092 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bq9qt" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" Oct 04 04:50:34 crc kubenswrapper[4802]: E1004 04:50:34.940119 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-76vwx" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" Oct 04 04:50:36 crc kubenswrapper[4802]: E1004 04:50:36.272617 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 04 04:50:36 crc kubenswrapper[4802]: E1004 04:50:36.273044 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pnhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qwg79_openshift-marketplace(9f9d61a9-b985-45da-bacf-c9fb55b52b66): ErrImagePull: rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" logger="UnhandledError" Oct 04 04:50:36 crc kubenswrapper[4802]: E1004 04:50:36.274132 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qwg79" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" Oct 04 04:50:37 crc kubenswrapper[4802]: E1004 04:50:37.182056 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qwg79" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" Oct 04 04:50:38 crc kubenswrapper[4802]: I1004 04:50:38.188224 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-q9tg7" event={"ID":"eee3cf4f-0b25-4641-865e-8f8101256453","Type":"ContainerStarted","Data":"64c653648d95e04f5ab18f96ba41c4cd41db13e72c6583f3b4f7165e81c01fcf"} Oct 04 04:50:38 crc kubenswrapper[4802]: I1004 04:50:38.188554 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-q9tg7" Oct 04 04:50:38 crc kubenswrapper[4802]: I1004 04:50:38.189447 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:38 crc kubenswrapper[4802]: I1004 04:50:38.189551 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:38 crc kubenswrapper[4802]: I1004 04:50:38.194453 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"5d0fec1919ce376bb23a83cbe1bd76cccaab831eee4aaa8ba10e1c4573fa8eff"} Oct 04 04:50:39 crc kubenswrapper[4802]: I1004 04:50:39.200190 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:39 crc kubenswrapper[4802]: I1004 04:50:39.201567 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:41 crc kubenswrapper[4802]: I1004 04:50:41.215217 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-678bk" event={"ID":"a072d485-e16f-4778-8ca1-6b48bbc9f397","Type":"ContainerStarted","Data":"d0d6d36b939e0d07b91e4eec0618bd81d7cbb277057453de207d01ec12d94681"} Oct 04 04:50:41 crc kubenswrapper[4802]: I1004 04:50:41.219439 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkmdf" event={"ID":"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3","Type":"ContainerStarted","Data":"568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18"} Oct 04 04:50:41 crc kubenswrapper[4802]: I1004 04:50:41.222165 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8frq9" 
event={"ID":"d30c5f5e-8390-4c6c-9dff-07157aa29319","Type":"ContainerStarted","Data":"a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d"} Oct 04 04:50:41 crc kubenswrapper[4802]: I1004 04:50:41.225198 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klg7m" event={"ID":"88d556c4-4775-463e-bfc7-c766fa10fce2","Type":"ContainerStarted","Data":"dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078"} Oct 04 04:50:42 crc kubenswrapper[4802]: I1004 04:50:42.233503 4802 generic.go:334] "Generic (PLEG): container finished" podID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerID="a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d" exitCode=0 Oct 04 04:50:42 crc kubenswrapper[4802]: I1004 04:50:42.233677 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8frq9" event={"ID":"d30c5f5e-8390-4c6c-9dff-07157aa29319","Type":"ContainerDied","Data":"a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d"} Oct 04 04:50:42 crc kubenswrapper[4802]: I1004 04:50:42.237359 4802 generic.go:334] "Generic (PLEG): container finished" podID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerID="dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078" exitCode=0 Oct 04 04:50:42 crc kubenswrapper[4802]: I1004 04:50:42.237455 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klg7m" event={"ID":"88d556c4-4775-463e-bfc7-c766fa10fce2","Type":"ContainerDied","Data":"dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078"} Oct 04 04:50:42 crc kubenswrapper[4802]: I1004 04:50:42.241802 4802 generic.go:334] "Generic (PLEG): container finished" podID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerID="d0d6d36b939e0d07b91e4eec0618bd81d7cbb277057453de207d01ec12d94681" exitCode=0 Oct 04 04:50:42 crc kubenswrapper[4802]: I1004 04:50:42.241881 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-678bk" event={"ID":"a072d485-e16f-4778-8ca1-6b48bbc9f397","Type":"ContainerDied","Data":"d0d6d36b939e0d07b91e4eec0618bd81d7cbb277057453de207d01ec12d94681"} Oct 04 04:50:42 crc kubenswrapper[4802]: I1004 04:50:42.243916 4802 generic.go:334] "Generic (PLEG): container finished" podID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerID="568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18" exitCode=0 Oct 04 04:50:42 crc kubenswrapper[4802]: I1004 04:50:42.243977 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkmdf" event={"ID":"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3","Type":"ContainerDied","Data":"568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18"} Oct 04 04:50:43 crc kubenswrapper[4802]: I1004 04:50:43.890498 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:43 crc kubenswrapper[4802]: I1004 04:50:43.890928 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:43 crc kubenswrapper[4802]: I1004 04:50:43.890506 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:43 crc kubenswrapper[4802]: I1004 04:50:43.891081 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" 
podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:53 crc kubenswrapper[4802]: I1004 04:50:53.889865 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:53 crc kubenswrapper[4802]: I1004 04:50:53.890492 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:50:53 crc kubenswrapper[4802]: I1004 04:50:53.889942 4802 patch_prober.go:28] interesting pod/downloads-7954f5f757-q9tg7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Oct 04 04:50:53 crc kubenswrapper[4802]: I1004 04:50:53.890588 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-q9tg7" podUID="eee3cf4f-0b25-4641-865e-8f8101256453" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Oct 04 04:51:02 crc kubenswrapper[4802]: I1004 04:51:02.401362 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klg7m" event={"ID":"88d556c4-4775-463e-bfc7-c766fa10fce2","Type":"ContainerStarted","Data":"d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85"} Oct 04 04:51:02 crc kubenswrapper[4802]: I1004 04:51:02.405316 4802 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-678bk" event={"ID":"a072d485-e16f-4778-8ca1-6b48bbc9f397","Type":"ContainerStarted","Data":"1ab53c9a4b645544756a823c760afb590279f89cbc9ab47e434141d95282c4f1"} Oct 04 04:51:02 crc kubenswrapper[4802]: I1004 04:51:02.408631 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkmdf" event={"ID":"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3","Type":"ContainerStarted","Data":"54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3"} Oct 04 04:51:02 crc kubenswrapper[4802]: I1004 04:51:02.411360 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8frq9" event={"ID":"d30c5f5e-8390-4c6c-9dff-07157aa29319","Type":"ContainerStarted","Data":"3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397"} Oct 04 04:51:02 crc kubenswrapper[4802]: I1004 04:51:02.427964 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-klg7m" podStartSLOduration=11.421271643 podStartE2EDuration="2m23.427948654s" podCreationTimestamp="2025-10-04 04:48:39 +0000 UTC" firstStartedPulling="2025-10-04 04:48:45.517258603 +0000 UTC m=+167.925259248" lastFinishedPulling="2025-10-04 04:50:57.523935624 +0000 UTC m=+299.931936259" observedRunningTime="2025-10-04 04:51:02.425709941 +0000 UTC m=+304.833710576" watchObservedRunningTime="2025-10-04 04:51:02.427948654 +0000 UTC m=+304.835949279" Oct 04 04:51:03 crc kubenswrapper[4802]: I1004 04:51:03.445982 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8frq9" podStartSLOduration=12.165380495 podStartE2EDuration="2m27.44595458s" podCreationTimestamp="2025-10-04 04:48:36 +0000 UTC" firstStartedPulling="2025-10-04 04:48:44.451986764 +0000 UTC m=+166.859987389" lastFinishedPulling="2025-10-04 04:50:59.732560809 +0000 UTC m=+302.140561474" 
observedRunningTime="2025-10-04 04:51:03.439130637 +0000 UTC m=+305.847131302" watchObservedRunningTime="2025-10-04 04:51:03.44595458 +0000 UTC m=+305.853955235" Oct 04 04:51:03 crc kubenswrapper[4802]: I1004 04:51:03.910149 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-q9tg7" Oct 04 04:51:03 crc kubenswrapper[4802]: I1004 04:51:03.926536 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wkmdf" podStartSLOduration=14.298321426 podStartE2EDuration="2m28.926509615s" podCreationTimestamp="2025-10-04 04:48:35 +0000 UTC" firstStartedPulling="2025-10-04 04:48:44.451966864 +0000 UTC m=+166.859967489" lastFinishedPulling="2025-10-04 04:50:59.080155043 +0000 UTC m=+301.488155678" observedRunningTime="2025-10-04 04:51:03.464910425 +0000 UTC m=+305.872911170" watchObservedRunningTime="2025-10-04 04:51:03.926509615 +0000 UTC m=+306.334510240" Oct 04 04:51:04 crc kubenswrapper[4802]: I1004 04:51:04.449668 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-678bk" podStartSLOduration=10.523870716 podStartE2EDuration="2m26.449617962s" podCreationTimestamp="2025-10-04 04:48:38 +0000 UTC" firstStartedPulling="2025-10-04 04:48:44.452220951 +0000 UTC m=+166.860221586" lastFinishedPulling="2025-10-04 04:51:00.377968157 +0000 UTC m=+302.785968832" observedRunningTime="2025-10-04 04:51:04.447934114 +0000 UTC m=+306.855934759" watchObservedRunningTime="2025-10-04 04:51:04.449617962 +0000 UTC m=+306.857618607" Oct 04 04:51:06 crc kubenswrapper[4802]: I1004 04:51:06.202122 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:51:06 crc kubenswrapper[4802]: I1004 04:51:06.204204 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wkmdf" 
Oct 04 04:51:06 crc kubenswrapper[4802]: I1004 04:51:06.380422 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:51:06 crc kubenswrapper[4802]: I1004 04:51:06.380477 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:51:07 crc kubenswrapper[4802]: I1004 04:51:07.171543 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:51:07 crc kubenswrapper[4802]: I1004 04:51:07.172920 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:51:07 crc kubenswrapper[4802]: I1004 04:51:07.223850 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wkmdf" Oct 04 04:51:07 crc kubenswrapper[4802]: I1004 04:51:07.236569 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8frq9" Oct 04 04:51:08 crc kubenswrapper[4802]: I1004 04:51:08.561590 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:51:08 crc kubenswrapper[4802]: I1004 04:51:08.562844 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:51:08 crc kubenswrapper[4802]: I1004 04:51:08.634449 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:51:09 crc kubenswrapper[4802]: I1004 04:51:09.490451 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:51:09 crc kubenswrapper[4802]: I1004 04:51:09.562093 4802 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:51:09 crc kubenswrapper[4802]: I1004 04:51:09.562158 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:51:09 crc kubenswrapper[4802]: I1004 04:51:09.602997 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:51:10 crc kubenswrapper[4802]: I1004 04:51:10.511596 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-klg7m" Oct 04 04:51:11 crc kubenswrapper[4802]: I1004 04:51:11.456442 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-678bk"] Oct 04 04:51:11 crc kubenswrapper[4802]: I1004 04:51:11.464960 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-678bk" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerName="registry-server" containerID="cri-o://1ab53c9a4b645544756a823c760afb590279f89cbc9ab47e434141d95282c4f1" gracePeriod=2 Oct 04 04:51:14 crc kubenswrapper[4802]: I1004 04:51:14.503783 4802 generic.go:334] "Generic (PLEG): container finished" podID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerID="1ab53c9a4b645544756a823c760afb590279f89cbc9ab47e434141d95282c4f1" exitCode=0 Oct 04 04:51:14 crc kubenswrapper[4802]: I1004 04:51:14.503871 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-678bk" event={"ID":"a072d485-e16f-4778-8ca1-6b48bbc9f397","Type":"ContainerDied","Data":"1ab53c9a4b645544756a823c760afb590279f89cbc9ab47e434141d95282c4f1"} Oct 04 04:51:15 crc kubenswrapper[4802]: I1004 04:51:15.845289 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:51:15 crc kubenswrapper[4802]: I1004 04:51:15.973534 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-catalog-content\") pod \"a072d485-e16f-4778-8ca1-6b48bbc9f397\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " Oct 04 04:51:15 crc kubenswrapper[4802]: I1004 04:51:15.973984 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngxwh\" (UniqueName: \"kubernetes.io/projected/a072d485-e16f-4778-8ca1-6b48bbc9f397-kube-api-access-ngxwh\") pod \"a072d485-e16f-4778-8ca1-6b48bbc9f397\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " Oct 04 04:51:15 crc kubenswrapper[4802]: I1004 04:51:15.974039 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-utilities\") pod \"a072d485-e16f-4778-8ca1-6b48bbc9f397\" (UID: \"a072d485-e16f-4778-8ca1-6b48bbc9f397\") " Oct 04 04:51:15 crc kubenswrapper[4802]: I1004 04:51:15.975023 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-utilities" (OuterVolumeSpecName: "utilities") pod "a072d485-e16f-4778-8ca1-6b48bbc9f397" (UID: "a072d485-e16f-4778-8ca1-6b48bbc9f397"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:51:15 crc kubenswrapper[4802]: I1004 04:51:15.987995 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a072d485-e16f-4778-8ca1-6b48bbc9f397" (UID: "a072d485-e16f-4778-8ca1-6b48bbc9f397"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:51:15 crc kubenswrapper[4802]: I1004 04:51:15.988223 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a072d485-e16f-4778-8ca1-6b48bbc9f397-kube-api-access-ngxwh" (OuterVolumeSpecName: "kube-api-access-ngxwh") pod "a072d485-e16f-4778-8ca1-6b48bbc9f397" (UID: "a072d485-e16f-4778-8ca1-6b48bbc9f397"). InnerVolumeSpecName "kube-api-access-ngxwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:51:16 crc kubenswrapper[4802]: I1004 04:51:16.075254 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:16 crc kubenswrapper[4802]: I1004 04:51:16.075299 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngxwh\" (UniqueName: \"kubernetes.io/projected/a072d485-e16f-4778-8ca1-6b48bbc9f397-kube-api-access-ngxwh\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:16 crc kubenswrapper[4802]: I1004 04:51:16.075313 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a072d485-e16f-4778-8ca1-6b48bbc9f397-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:16 crc kubenswrapper[4802]: I1004 04:51:16.517366 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-678bk" event={"ID":"a072d485-e16f-4778-8ca1-6b48bbc9f397","Type":"ContainerDied","Data":"aadaecee8b77482a2240b7ee8d83c25f7866680ae68459a2a84f037a14976018"} Oct 04 04:51:16 crc kubenswrapper[4802]: I1004 04:51:16.517409 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-678bk" Oct 04 04:51:16 crc kubenswrapper[4802]: I1004 04:51:16.517432 4802 scope.go:117] "RemoveContainer" containerID="1ab53c9a4b645544756a823c760afb590279f89cbc9ab47e434141d95282c4f1" Oct 04 04:51:16 crc kubenswrapper[4802]: I1004 04:51:16.536109 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-678bk"] Oct 04 04:51:16 crc kubenswrapper[4802]: I1004 04:51:16.537336 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-678bk"] Oct 04 04:51:17 crc kubenswrapper[4802]: I1004 04:51:17.647067 4802 scope.go:117] "RemoveContainer" containerID="d0d6d36b939e0d07b91e4eec0618bd81d7cbb277057453de207d01ec12d94681" Oct 04 04:51:18 crc kubenswrapper[4802]: I1004 04:51:18.105865 4802 scope.go:117] "RemoveContainer" containerID="6bc6ef9569683432af1f865849de422a55e442459742a815f7fd454d5024ed41" Oct 04 04:51:18 crc kubenswrapper[4802]: I1004 04:51:18.369179 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" path="/var/lib/kubelet/pods/a072d485-e16f-4778-8ca1-6b48bbc9f397/volumes" Oct 04 04:51:20 crc kubenswrapper[4802]: I1004 04:51:20.544680 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwg79" event={"ID":"9f9d61a9-b985-45da-bacf-c9fb55b52b66","Type":"ContainerStarted","Data":"a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f"} Oct 04 04:51:20 crc kubenswrapper[4802]: I1004 04:51:20.548176 4802 generic.go:334] "Generic (PLEG): container finished" podID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerID="2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f" exitCode=0 Oct 04 04:51:20 crc kubenswrapper[4802]: I1004 04:51:20.548237 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76vwx" 
event={"ID":"175ac8cc-efac-4fd1-ba01-e224ab93757f","Type":"ContainerDied","Data":"2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f"} Oct 04 04:51:21 crc kubenswrapper[4802]: I1004 04:51:21.556019 4802 generic.go:334] "Generic (PLEG): container finished" podID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerID="a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f" exitCode=0 Oct 04 04:51:21 crc kubenswrapper[4802]: I1004 04:51:21.556098 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwg79" event={"ID":"9f9d61a9-b985-45da-bacf-c9fb55b52b66","Type":"ContainerDied","Data":"a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f"} Oct 04 04:51:21 crc kubenswrapper[4802]: I1004 04:51:21.560697 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6c8r" event={"ID":"6dcaa0a7-ecad-45f4-9dec-510b38c25042","Type":"ContainerStarted","Data":"78df99847baa78ec1d5ae1cfb622148c098d128ec916e39e8ff4dc53fb6493c1"} Oct 04 04:51:21 crc kubenswrapper[4802]: I1004 04:51:21.563437 4802 generic.go:334] "Generic (PLEG): container finished" podID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerID="2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007" exitCode=0 Oct 04 04:51:21 crc kubenswrapper[4802]: I1004 04:51:21.563537 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9qt" event={"ID":"c067018f-e13d-4b01-a1f7-49528fcd6397","Type":"ContainerDied","Data":"2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007"} Oct 04 04:51:22 crc kubenswrapper[4802]: I1004 04:51:22.571789 4802 generic.go:334] "Generic (PLEG): container finished" podID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerID="78df99847baa78ec1d5ae1cfb622148c098d128ec916e39e8ff4dc53fb6493c1" exitCode=0 Oct 04 04:51:22 crc kubenswrapper[4802]: I1004 04:51:22.571855 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-m6c8r" event={"ID":"6dcaa0a7-ecad-45f4-9dec-510b38c25042","Type":"ContainerDied","Data":"78df99847baa78ec1d5ae1cfb622148c098d128ec916e39e8ff4dc53fb6493c1"} Oct 04 04:51:41 crc kubenswrapper[4802]: I1004 04:51:41.684689 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwg79" event={"ID":"9f9d61a9-b985-45da-bacf-c9fb55b52b66","Type":"ContainerStarted","Data":"92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f"} Oct 04 04:51:41 crc kubenswrapper[4802]: I1004 04:51:41.688235 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6c8r" event={"ID":"6dcaa0a7-ecad-45f4-9dec-510b38c25042","Type":"ContainerStarted","Data":"e841884760a30c9f23a0811931888013d14a8d7d690b647375b7f6ad42068294"} Oct 04 04:51:41 crc kubenswrapper[4802]: I1004 04:51:41.691312 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9qt" event={"ID":"c067018f-e13d-4b01-a1f7-49528fcd6397","Type":"ContainerStarted","Data":"4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905"} Oct 04 04:51:41 crc kubenswrapper[4802]: I1004 04:51:41.693837 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76vwx" event={"ID":"175ac8cc-efac-4fd1-ba01-e224ab93757f","Type":"ContainerStarted","Data":"2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f"} Oct 04 04:51:41 crc kubenswrapper[4802]: I1004 04:51:41.705895 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwg79" podStartSLOduration=8.693558685 podStartE2EDuration="3m4.705878522s" podCreationTimestamp="2025-10-04 04:48:37 +0000 UTC" firstStartedPulling="2025-10-04 04:48:44.452226301 +0000 UTC m=+166.860226926" lastFinishedPulling="2025-10-04 04:51:40.464546138 +0000 UTC m=+342.872546763" observedRunningTime="2025-10-04 
04:51:41.702522701 +0000 UTC m=+344.110523326" watchObservedRunningTime="2025-10-04 04:51:41.705878522 +0000 UTC m=+344.113879147" Oct 04 04:51:41 crc kubenswrapper[4802]: I1004 04:51:41.722033 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m6c8r" podStartSLOduration=8.197024425 podStartE2EDuration="3m2.722003204s" podCreationTimestamp="2025-10-04 04:48:39 +0000 UTC" firstStartedPulling="2025-10-04 04:48:45.517299054 +0000 UTC m=+167.925299679" lastFinishedPulling="2025-10-04 04:51:40.042277783 +0000 UTC m=+342.450278458" observedRunningTime="2025-10-04 04:51:41.71620464 +0000 UTC m=+344.124205275" watchObservedRunningTime="2025-10-04 04:51:41.722003204 +0000 UTC m=+344.130003849" Oct 04 04:51:42 crc kubenswrapper[4802]: I1004 04:51:42.729411 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bq9qt" podStartSLOduration=10.122671141 podStartE2EDuration="3m6.729394812s" podCreationTimestamp="2025-10-04 04:48:36 +0000 UTC" firstStartedPulling="2025-10-04 04:48:43.435441709 +0000 UTC m=+165.843442334" lastFinishedPulling="2025-10-04 04:51:40.04216538 +0000 UTC m=+342.450166005" observedRunningTime="2025-10-04 04:51:42.727824135 +0000 UTC m=+345.135824770" watchObservedRunningTime="2025-10-04 04:51:42.729394812 +0000 UTC m=+345.137395437" Oct 04 04:51:43 crc kubenswrapper[4802]: I1004 04:51:43.728789 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-76vwx" podStartSLOduration=21.596191555 podStartE2EDuration="3m7.728762761s" podCreationTimestamp="2025-10-04 04:48:36 +0000 UTC" firstStartedPulling="2025-10-04 04:48:44.452250422 +0000 UTC m=+166.860251047" lastFinishedPulling="2025-10-04 04:51:30.584821628 +0000 UTC m=+332.992822253" observedRunningTime="2025-10-04 04:51:43.725115752 +0000 UTC m=+346.133116397" watchObservedRunningTime="2025-10-04 04:51:43.728762761 
+0000 UTC m=+346.136763406" Oct 04 04:51:46 crc kubenswrapper[4802]: I1004 04:51:46.569584 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:51:46 crc kubenswrapper[4802]: I1004 04:51:46.569711 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:51:46 crc kubenswrapper[4802]: I1004 04:51:46.608779 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:51:46 crc kubenswrapper[4802]: I1004 04:51:46.773106 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:51:46 crc kubenswrapper[4802]: I1004 04:51:46.773572 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:51:46 crc kubenswrapper[4802]: I1004 04:51:46.774728 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:51:46 crc kubenswrapper[4802]: I1004 04:51:46.820552 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:51:47 crc kubenswrapper[4802]: I1004 04:51:47.765174 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:51:48 crc kubenswrapper[4802]: I1004 04:51:48.166636 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:51:48 crc kubenswrapper[4802]: I1004 04:51:48.169222 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:51:48 crc kubenswrapper[4802]: I1004 04:51:48.220812 4802 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:51:48 crc kubenswrapper[4802]: I1004 04:51:48.771375 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwg79" Oct 04 04:51:49 crc kubenswrapper[4802]: I1004 04:51:49.724376 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9qt"] Oct 04 04:51:49 crc kubenswrapper[4802]: I1004 04:51:49.725471 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bq9qt" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerName="registry-server" containerID="cri-o://4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905" gracePeriod=2 Oct 04 04:51:49 crc kubenswrapper[4802]: I1004 04:51:49.990811 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:51:49 crc kubenswrapper[4802]: I1004 04:51:49.990862 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.034426 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.106386 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.149760 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg7m2\" (UniqueName: \"kubernetes.io/projected/c067018f-e13d-4b01-a1f7-49528fcd6397-kube-api-access-xg7m2\") pod \"c067018f-e13d-4b01-a1f7-49528fcd6397\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.155305 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c067018f-e13d-4b01-a1f7-49528fcd6397-kube-api-access-xg7m2" (OuterVolumeSpecName: "kube-api-access-xg7m2") pod "c067018f-e13d-4b01-a1f7-49528fcd6397" (UID: "c067018f-e13d-4b01-a1f7-49528fcd6397"). InnerVolumeSpecName "kube-api-access-xg7m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.250421 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-catalog-content\") pod \"c067018f-e13d-4b01-a1f7-49528fcd6397\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.250482 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-utilities\") pod \"c067018f-e13d-4b01-a1f7-49528fcd6397\" (UID: \"c067018f-e13d-4b01-a1f7-49528fcd6397\") " Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.250764 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg7m2\" (UniqueName: \"kubernetes.io/projected/c067018f-e13d-4b01-a1f7-49528fcd6397-kube-api-access-xg7m2\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.251560 4802 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-utilities" (OuterVolumeSpecName: "utilities") pod "c067018f-e13d-4b01-a1f7-49528fcd6397" (UID: "c067018f-e13d-4b01-a1f7-49528fcd6397"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.295496 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c067018f-e13d-4b01-a1f7-49528fcd6397" (UID: "c067018f-e13d-4b01-a1f7-49528fcd6397"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.321142 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-76vwx"] Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.351916 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.351959 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c067018f-e13d-4b01-a1f7-49528fcd6397-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.744675 4802 generic.go:334] "Generic (PLEG): container finished" podID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerID="4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905" exitCode=0 Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.744745 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq9qt" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.744777 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9qt" event={"ID":"c067018f-e13d-4b01-a1f7-49528fcd6397","Type":"ContainerDied","Data":"4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905"} Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.744858 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9qt" event={"ID":"c067018f-e13d-4b01-a1f7-49528fcd6397","Type":"ContainerDied","Data":"ad1d754f3ad6236c5d56b5b4b780336e09e14f1e5e359332f71220efe4aa840c"} Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.744888 4802 scope.go:117] "RemoveContainer" containerID="4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.745296 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-76vwx" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerName="registry-server" containerID="cri-o://2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f" gracePeriod=2 Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.771281 4802 scope.go:117] "RemoveContainer" containerID="2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.773782 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9qt"] Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.779118 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bq9qt"] Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.790345 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:51:50 
crc kubenswrapper[4802]: I1004 04:51:50.798015 4802 scope.go:117] "RemoveContainer" containerID="febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.825590 4802 scope.go:117] "RemoveContainer" containerID="4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905" Oct 04 04:51:50 crc kubenswrapper[4802]: E1004 04:51:50.826167 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905\": container with ID starting with 4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905 not found: ID does not exist" containerID="4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.826212 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905"} err="failed to get container status \"4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905\": rpc error: code = NotFound desc = could not find container \"4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905\": container with ID starting with 4b431cece9c1309077eaa521277f059cf189992e9b5be425248d4cbe1a6fe905 not found: ID does not exist" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.826243 4802 scope.go:117] "RemoveContainer" containerID="2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007" Oct 04 04:51:50 crc kubenswrapper[4802]: E1004 04:51:50.827092 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007\": container with ID starting with 2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007 not found: ID does not exist" 
containerID="2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.827160 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007"} err="failed to get container status \"2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007\": rpc error: code = NotFound desc = could not find container \"2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007\": container with ID starting with 2cea264165872babb330a1a95d45e8a0fa90e754db861e750aacbdc888188007 not found: ID does not exist" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.827200 4802 scope.go:117] "RemoveContainer" containerID="febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01" Oct 04 04:51:50 crc kubenswrapper[4802]: E1004 04:51:50.827623 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01\": container with ID starting with febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01 not found: ID does not exist" containerID="febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01" Oct 04 04:51:50 crc kubenswrapper[4802]: I1004 04:51:50.827713 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01"} err="failed to get container status \"febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01\": rpc error: code = NotFound desc = could not find container \"febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01\": container with ID starting with febd2030d0b49b82e0e08295982121ef08ddffb05f603f34a39462049773da01 not found: ID does not exist" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.436502 4802 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.570579 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-catalog-content\") pod \"175ac8cc-efac-4fd1-ba01-e224ab93757f\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.570673 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-utilities\") pod \"175ac8cc-efac-4fd1-ba01-e224ab93757f\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.570929 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn2r9\" (UniqueName: \"kubernetes.io/projected/175ac8cc-efac-4fd1-ba01-e224ab93757f-kube-api-access-wn2r9\") pod \"175ac8cc-efac-4fd1-ba01-e224ab93757f\" (UID: \"175ac8cc-efac-4fd1-ba01-e224ab93757f\") " Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.571604 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-utilities" (OuterVolumeSpecName: "utilities") pod "175ac8cc-efac-4fd1-ba01-e224ab93757f" (UID: "175ac8cc-efac-4fd1-ba01-e224ab93757f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.583875 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175ac8cc-efac-4fd1-ba01-e224ab93757f-kube-api-access-wn2r9" (OuterVolumeSpecName: "kube-api-access-wn2r9") pod "175ac8cc-efac-4fd1-ba01-e224ab93757f" (UID: "175ac8cc-efac-4fd1-ba01-e224ab93757f"). 
InnerVolumeSpecName "kube-api-access-wn2r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.615931 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "175ac8cc-efac-4fd1-ba01-e224ab93757f" (UID: "175ac8cc-efac-4fd1-ba01-e224ab93757f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.672321 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.672360 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn2r9\" (UniqueName: \"kubernetes.io/projected/175ac8cc-efac-4fd1-ba01-e224ab93757f-kube-api-access-wn2r9\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.672375 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175ac8cc-efac-4fd1-ba01-e224ab93757f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.752125 4802 generic.go:334] "Generic (PLEG): container finished" podID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerID="2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f" exitCode=0 Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.752191 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-76vwx" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.752216 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76vwx" event={"ID":"175ac8cc-efac-4fd1-ba01-e224ab93757f","Type":"ContainerDied","Data":"2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f"} Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.752867 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76vwx" event={"ID":"175ac8cc-efac-4fd1-ba01-e224ab93757f","Type":"ContainerDied","Data":"c0cec0ba99603ce0c43df3ef5323b105360f2ae5895145851d14635642664efc"} Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.752950 4802 scope.go:117] "RemoveContainer" containerID="2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.776226 4802 scope.go:117] "RemoveContainer" containerID="2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.785475 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-76vwx"] Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.789159 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-76vwx"] Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.814294 4802 scope.go:117] "RemoveContainer" containerID="60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.833673 4802 scope.go:117] "RemoveContainer" containerID="2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f" Oct 04 04:51:51 crc kubenswrapper[4802]: E1004 04:51:51.834204 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f\": container with ID starting with 2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f not found: ID does not exist" containerID="2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.834271 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f"} err="failed to get container status \"2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f\": rpc error: code = NotFound desc = could not find container \"2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f\": container with ID starting with 2ebb4c4727665b559e11bdb619b4d194da19ae9838d68d0f39a4657b2f6a106f not found: ID does not exist" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.834311 4802 scope.go:117] "RemoveContainer" containerID="2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f" Oct 04 04:51:51 crc kubenswrapper[4802]: E1004 04:51:51.834736 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f\": container with ID starting with 2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f not found: ID does not exist" containerID="2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.834763 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f"} err="failed to get container status \"2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f\": rpc error: code = NotFound desc = could not find container \"2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f\": container with ID 
starting with 2608e68413b4bc3463e42a47748c1d9b860a65eead9b73536a0c00e783f9e21f not found: ID does not exist" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.834781 4802 scope.go:117] "RemoveContainer" containerID="60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1" Oct 04 04:51:51 crc kubenswrapper[4802]: E1004 04:51:51.835134 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1\": container with ID starting with 60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1 not found: ID does not exist" containerID="60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1" Oct 04 04:51:51 crc kubenswrapper[4802]: I1004 04:51:51.835161 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1"} err="failed to get container status \"60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1\": rpc error: code = NotFound desc = could not find container \"60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1\": container with ID starting with 60c9cee0ff5bc0ea7d24689dd6d5f21ba5c1e5be8d35370e7d0139de92c299e1 not found: ID does not exist" Oct 04 04:51:52 crc kubenswrapper[4802]: I1004 04:51:52.367501 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" path="/var/lib/kubelet/pods/175ac8cc-efac-4fd1-ba01-e224ab93757f/volumes" Oct 04 04:51:52 crc kubenswrapper[4802]: I1004 04:51:52.368930 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" path="/var/lib/kubelet/pods/c067018f-e13d-4b01-a1f7-49528fcd6397/volumes" Oct 04 04:51:54 crc kubenswrapper[4802]: I1004 04:51:54.114947 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-m6c8r"] Oct 04 04:51:54 crc kubenswrapper[4802]: I1004 04:51:54.115196 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m6c8r" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerName="registry-server" containerID="cri-o://e841884760a30c9f23a0811931888013d14a8d7d690b647375b7f6ad42068294" gracePeriod=2 Oct 04 04:51:54 crc kubenswrapper[4802]: I1004 04:51:54.774745 4802 generic.go:334] "Generic (PLEG): container finished" podID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerID="e841884760a30c9f23a0811931888013d14a8d7d690b647375b7f6ad42068294" exitCode=0 Oct 04 04:51:54 crc kubenswrapper[4802]: I1004 04:51:54.774845 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6c8r" event={"ID":"6dcaa0a7-ecad-45f4-9dec-510b38c25042","Type":"ContainerDied","Data":"e841884760a30c9f23a0811931888013d14a8d7d690b647375b7f6ad42068294"} Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.174751 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.323140 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-utilities\") pod \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.323328 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdk4f\" (UniqueName: \"kubernetes.io/projected/6dcaa0a7-ecad-45f4-9dec-510b38c25042-kube-api-access-wdk4f\") pod \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.323360 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-catalog-content\") pod \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\" (UID: \"6dcaa0a7-ecad-45f4-9dec-510b38c25042\") " Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.324309 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-utilities" (OuterVolumeSpecName: "utilities") pod "6dcaa0a7-ecad-45f4-9dec-510b38c25042" (UID: "6dcaa0a7-ecad-45f4-9dec-510b38c25042"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.344485 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcaa0a7-ecad-45f4-9dec-510b38c25042-kube-api-access-wdk4f" (OuterVolumeSpecName: "kube-api-access-wdk4f") pod "6dcaa0a7-ecad-45f4-9dec-510b38c25042" (UID: "6dcaa0a7-ecad-45f4-9dec-510b38c25042"). InnerVolumeSpecName "kube-api-access-wdk4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.420611 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dcaa0a7-ecad-45f4-9dec-510b38c25042" (UID: "6dcaa0a7-ecad-45f4-9dec-510b38c25042"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.424344 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.424424 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdk4f\" (UniqueName: \"kubernetes.io/projected/6dcaa0a7-ecad-45f4-9dec-510b38c25042-kube-api-access-wdk4f\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.424445 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcaa0a7-ecad-45f4-9dec-510b38c25042-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.783874 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6c8r" event={"ID":"6dcaa0a7-ecad-45f4-9dec-510b38c25042","Type":"ContainerDied","Data":"0a66442fa17acff7b51d8455b19b370f8bfcdc81382cf75be7d65e02a85d4480"} Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.783955 4802 scope.go:117] "RemoveContainer" containerID="e841884760a30c9f23a0811931888013d14a8d7d690b647375b7f6ad42068294" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.784433 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6c8r" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.805058 4802 scope.go:117] "RemoveContainer" containerID="78df99847baa78ec1d5ae1cfb622148c098d128ec916e39e8ff4dc53fb6493c1" Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.815438 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m6c8r"] Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.818534 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m6c8r"] Oct 04 04:51:55 crc kubenswrapper[4802]: I1004 04:51:55.842163 4802 scope.go:117] "RemoveContainer" containerID="0fcc79bc419bb3b460319ad6d239305625dc2184567db0d3dac329b8cadab605" Oct 04 04:51:56 crc kubenswrapper[4802]: I1004 04:51:56.367763 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" path="/var/lib/kubelet/pods/6dcaa0a7-ecad-45f4-9dec-510b38c25042/volumes" Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.243089 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8frq9"] Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.244275 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8frq9" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerName="registry-server" containerID="cri-o://3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397" gracePeriod=30 Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.252899 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkmdf"] Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.253213 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wkmdf" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" 
containerName="registry-server" containerID="cri-o://54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3" gracePeriod=30 Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.267109 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cs42x"] Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.267433 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" podUID="cc36cbe1-f043-49df-bb90-158d61ac67ad" containerName="marketplace-operator" containerID="cri-o://ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952" gracePeriod=30 Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.285360 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwg79"] Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.285725 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwg79" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerName="registry-server" containerID="cri-o://92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f" gracePeriod=30 Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.322335 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2tsz"] Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323158 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181c60ac-6236-4b41-9d35-4c897b437ae3" containerName="pruner" Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323210 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="181c60ac-6236-4b41-9d35-4c897b437ae3" containerName="pruner" Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323227 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" 
containerName="extract-content" Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323235 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerName="extract-content" Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323248 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerName="extract-content" Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323257 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerName="extract-content" Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323271 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerName="registry-server" Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323277 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerName="registry-server" Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323291 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerName="extract-utilities" Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323297 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerName="extract-utilities" Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323322 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerName="extract-content" Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323346 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerName="extract-content" Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323371 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" 
containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323378 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323391 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerName="extract-content"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323397 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerName="extract-content"
Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323426 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323440 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323454 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerName="extract-utilities"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323460 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerName="extract-utilities"
Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323478 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerName="extract-utilities"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323484 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerName="extract-utilities"
Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323505 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerName="extract-utilities"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323512 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerName="extract-utilities"
Oct 04 04:52:48 crc kubenswrapper[4802]: E1004 04:52:48.323525 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323531 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323874 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="c067018f-e13d-4b01-a1f7-49528fcd6397" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323886 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="175ac8cc-efac-4fd1-ba01-e224ab93757f" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323897 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="181c60ac-6236-4b41-9d35-4c897b437ae3" containerName="pruner"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323915 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a072d485-e16f-4778-8ca1-6b48bbc9f397" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.323931 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcaa0a7-ecad-45f4-9dec-510b38c25042" containerName="registry-server"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.324741 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.327563 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klg7m"]
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.328033 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-klg7m" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerName="registry-server" containerID="cri-o://d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85" gracePeriod=30
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.336734 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2tsz"]
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.493908 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/709abe0b-3c8d-4646-bdce-0a38e7e406f8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.494355 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gm79\" (UniqueName: \"kubernetes.io/projected/709abe0b-3c8d-4646-bdce-0a38e7e406f8-kube-api-access-2gm79\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.494478 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/709abe0b-3c8d-4646-bdce-0a38e7e406f8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.595659 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/709abe0b-3c8d-4646-bdce-0a38e7e406f8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.595730 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gm79\" (UniqueName: \"kubernetes.io/projected/709abe0b-3c8d-4646-bdce-0a38e7e406f8-kube-api-access-2gm79\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.595774 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/709abe0b-3c8d-4646-bdce-0a38e7e406f8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.597777 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/709abe0b-3c8d-4646-bdce-0a38e7e406f8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.615182 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/709abe0b-3c8d-4646-bdce-0a38e7e406f8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.617283 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gm79\" (UniqueName: \"kubernetes.io/projected/709abe0b-3c8d-4646-bdce-0a38e7e406f8-kube-api-access-2gm79\") pod \"marketplace-operator-79b997595-x2tsz\" (UID: \"709abe0b-3c8d-4646-bdce-0a38e7e406f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.760036 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.764979 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8frq9"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.776184 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkmdf"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.790362 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwg79"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.813047 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.838270 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klg7m"
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900089 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-catalog-content\") pod \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900145 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-catalog-content\") pod \"d30c5f5e-8390-4c6c-9dff-07157aa29319\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900191 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pnhh\" (UniqueName: \"kubernetes.io/projected/9f9d61a9-b985-45da-bacf-c9fb55b52b66-kube-api-access-4pnhh\") pod \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900251 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwmxp\" (UniqueName: \"kubernetes.io/projected/d30c5f5e-8390-4c6c-9dff-07157aa29319-kube-api-access-kwmxp\") pod \"d30c5f5e-8390-4c6c-9dff-07157aa29319\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900319 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-utilities\") pod \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900347 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-utilities\") pod \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900378 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-catalog-content\") pod \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\" (UID: \"9f9d61a9-b985-45da-bacf-c9fb55b52b66\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900421 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-utilities\") pod \"d30c5f5e-8390-4c6c-9dff-07157aa29319\" (UID: \"d30c5f5e-8390-4c6c-9dff-07157aa29319\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.900451 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s72b\" (UniqueName: \"kubernetes.io/projected/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-kube-api-access-5s72b\") pod \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\" (UID: \"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3\") "
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.902216 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-utilities" (OuterVolumeSpecName: "utilities") pod "9f9d61a9-b985-45da-bacf-c9fb55b52b66" (UID: "9f9d61a9-b985-45da-bacf-c9fb55b52b66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.902214 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-utilities" (OuterVolumeSpecName: "utilities") pod "01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" (UID: "01bacf77-5ec5-42c6-af1a-d27ffc2f26e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.903435 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-utilities" (OuterVolumeSpecName: "utilities") pod "d30c5f5e-8390-4c6c-9dff-07157aa29319" (UID: "d30c5f5e-8390-4c6c-9dff-07157aa29319"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.906403 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30c5f5e-8390-4c6c-9dff-07157aa29319-kube-api-access-kwmxp" (OuterVolumeSpecName: "kube-api-access-kwmxp") pod "d30c5f5e-8390-4c6c-9dff-07157aa29319" (UID: "d30c5f5e-8390-4c6c-9dff-07157aa29319"). InnerVolumeSpecName "kube-api-access-kwmxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.907016 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-kube-api-access-5s72b" (OuterVolumeSpecName: "kube-api-access-5s72b") pod "01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" (UID: "01bacf77-5ec5-42c6-af1a-d27ffc2f26e3"). InnerVolumeSpecName "kube-api-access-5s72b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.907110 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9d61a9-b985-45da-bacf-c9fb55b52b66-kube-api-access-4pnhh" (OuterVolumeSpecName: "kube-api-access-4pnhh") pod "9f9d61a9-b985-45da-bacf-c9fb55b52b66" (UID: "9f9d61a9-b985-45da-bacf-c9fb55b52b66"). InnerVolumeSpecName "kube-api-access-4pnhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.921776 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f9d61a9-b985-45da-bacf-c9fb55b52b66" (UID: "9f9d61a9-b985-45da-bacf-c9fb55b52b66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.967327 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" (UID: "01bacf77-5ec5-42c6-af1a-d27ffc2f26e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:52:48 crc kubenswrapper[4802]: I1004 04:52:48.976228 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d30c5f5e-8390-4c6c-9dff-07157aa29319" (UID: "d30c5f5e-8390-4c6c-9dff-07157aa29319"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:48.999607 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2tsz"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.001950 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8w4b\" (UniqueName: \"kubernetes.io/projected/88d556c4-4775-463e-bfc7-c766fa10fce2-kube-api-access-c8w4b\") pod \"88d556c4-4775-463e-bfc7-c766fa10fce2\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") "
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.002074 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-utilities\") pod \"88d556c4-4775-463e-bfc7-c766fa10fce2\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") "
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.002190 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-trusted-ca\") pod \"cc36cbe1-f043-49df-bb90-158d61ac67ad\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") "
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.002240 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbf84\" (UniqueName: \"kubernetes.io/projected/cc36cbe1-f043-49df-bb90-158d61ac67ad-kube-api-access-jbf84\") pod \"cc36cbe1-f043-49df-bb90-158d61ac67ad\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") "
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.002307 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-catalog-content\") pod \"88d556c4-4775-463e-bfc7-c766fa10fce2\" (UID: \"88d556c4-4775-463e-bfc7-c766fa10fce2\") "
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.002377 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-operator-metrics\") pod \"cc36cbe1-f043-49df-bb90-158d61ac67ad\" (UID: \"cc36cbe1-f043-49df-bb90-158d61ac67ad\") "
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.002992 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cc36cbe1-f043-49df-bb90-158d61ac67ad" (UID: "cc36cbe1-f043-49df-bb90-158d61ac67ad"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.003182 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-utilities" (OuterVolumeSpecName: "utilities") pod "88d556c4-4775-463e-bfc7-c766fa10fce2" (UID: "88d556c4-4775-463e-bfc7-c766fa10fce2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.005702 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d556c4-4775-463e-bfc7-c766fa10fce2-kube-api-access-c8w4b" (OuterVolumeSpecName: "kube-api-access-c8w4b") pod "88d556c4-4775-463e-bfc7-c766fa10fce2" (UID: "88d556c4-4775-463e-bfc7-c766fa10fce2"). InnerVolumeSpecName "kube-api-access-c8w4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.006872 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cc36cbe1-f043-49df-bb90-158d61ac67ad" (UID: "cc36cbe1-f043-49df-bb90-158d61ac67ad"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.008948 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc36cbe1-f043-49df-bb90-158d61ac67ad-kube-api-access-jbf84" (OuterVolumeSpecName: "kube-api-access-jbf84") pod "cc36cbe1-f043-49df-bb90-158d61ac67ad" (UID: "cc36cbe1-f043-49df-bb90-158d61ac67ad"). InnerVolumeSpecName "kube-api-access-jbf84". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012856 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwmxp\" (UniqueName: \"kubernetes.io/projected/d30c5f5e-8390-4c6c-9dff-07157aa29319-kube-api-access-kwmxp\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012913 4802 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012928 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-utilities\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012940 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-utilities\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012949 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d61a9-b985-45da-bacf-c9fb55b52b66-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012962 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8w4b\" (UniqueName: \"kubernetes.io/projected/88d556c4-4775-463e-bfc7-c766fa10fce2-kube-api-access-c8w4b\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012972 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-utilities\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012980 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s72b\" (UniqueName: \"kubernetes.io/projected/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-kube-api-access-5s72b\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.012991 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-utilities\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.013000 4802 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc36cbe1-f043-49df-bb90-158d61ac67ad-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.013008 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.013017 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30c5f5e-8390-4c6c-9dff-07157aa29319-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.013027 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbf84\" (UniqueName: \"kubernetes.io/projected/cc36cbe1-f043-49df-bb90-158d61ac67ad-kube-api-access-jbf84\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.013036 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pnhh\" (UniqueName: \"kubernetes.io/projected/9f9d61a9-b985-45da-bacf-c9fb55b52b66-kube-api-access-4pnhh\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.097539 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88d556c4-4775-463e-bfc7-c766fa10fce2" (UID: "88d556c4-4775-463e-bfc7-c766fa10fce2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.111462 4802 generic.go:334] "Generic (PLEG): container finished" podID="cc36cbe1-f043-49df-bb90-158d61ac67ad" containerID="ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952" exitCode=0
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.111535 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.111544 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" event={"ID":"cc36cbe1-f043-49df-bb90-158d61ac67ad","Type":"ContainerDied","Data":"ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.111579 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cs42x" event={"ID":"cc36cbe1-f043-49df-bb90-158d61ac67ad","Type":"ContainerDied","Data":"4777515bb1744dd02220bff607b6976c8032791ffa4fd6f05afe91c8b5b0eb87"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.111599 4802 scope.go:117] "RemoveContainer" containerID="ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.114563 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d556c4-4775-463e-bfc7-c766fa10fce2-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.116323 4802 generic.go:334] "Generic (PLEG): container finished" podID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerID="54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3" exitCode=0
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.116383 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkmdf" event={"ID":"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3","Type":"ContainerDied","Data":"54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.116408 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkmdf" event={"ID":"01bacf77-5ec5-42c6-af1a-d27ffc2f26e3","Type":"ContainerDied","Data":"ccdd38f286918461a5badb4c9d7bdfb05cd950d550a0102927995ae283b16f58"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.116477 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkmdf"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.123179 4802 generic.go:334] "Generic (PLEG): container finished" podID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerID="92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f" exitCode=0
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.123245 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwg79"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.123279 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwg79" event={"ID":"9f9d61a9-b985-45da-bacf-c9fb55b52b66","Type":"ContainerDied","Data":"92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.123450 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwg79" event={"ID":"9f9d61a9-b985-45da-bacf-c9fb55b52b66","Type":"ContainerDied","Data":"c330bbea77d72b1fb7db9daa155a0e2559442f60e8d4f04ed7dfc1a027bad612"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.129462 4802 generic.go:334] "Generic (PLEG): container finished" podID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerID="3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397" exitCode=0
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.129579 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8frq9" event={"ID":"d30c5f5e-8390-4c6c-9dff-07157aa29319","Type":"ContainerDied","Data":"3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.129623 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8frq9" event={"ID":"d30c5f5e-8390-4c6c-9dff-07157aa29319","Type":"ContainerDied","Data":"14be587530e59fe2d4373d8706a0ebb67c55af54c5056cc12e064f581cd32a15"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.129714 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8frq9"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.132870 4802 scope.go:117] "RemoveContainer" containerID="ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952"
Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.134206 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952\": container with ID starting with ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952 not found: ID does not exist" containerID="ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.134326 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952"} err="failed to get container status \"ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952\": rpc error: code = NotFound desc = could not find container \"ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952\": container with ID starting with ff15c88f6a08fea5bd1405b4e5d5da882526680ef96cad33464f3c5cc69c8952 not found: ID does not exist"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.134430 4802 scope.go:117] "RemoveContainer" containerID="54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.135692 4802 generic.go:334] "Generic (PLEG): container finished" podID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerID="d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85" exitCode=0
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.135959 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klg7m" event={"ID":"88d556c4-4775-463e-bfc7-c766fa10fce2","Type":"ContainerDied","Data":"d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.135974 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klg7m"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.135997 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klg7m" event={"ID":"88d556c4-4775-463e-bfc7-c766fa10fce2","Type":"ContainerDied","Data":"82662a051e4f8677812dd3735e02dfe546d2a6495558988fc64eac5036c96e32"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.143259 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz" event={"ID":"709abe0b-3c8d-4646-bdce-0a38e7e406f8","Type":"ContainerStarted","Data":"fcdbf202b7c4a14aca4b4ecdb53b0fb6c02a49e4790c139eb63c681fb384ef41"}
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.145062 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cs42x"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.148840 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cs42x"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.166129 4802 scope.go:117] "RemoveContainer" containerID="568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.166780 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkmdf"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.171054 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wkmdf"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.187785 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwg79"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.189697 4802 scope.go:117] "RemoveContainer" containerID="0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.191456 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwg79"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.200975 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klg7m"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.204962 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-klg7m"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.215981 4802 scope.go:117] "RemoveContainer" containerID="54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3"
Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.216617 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3\": container with ID starting with 54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3 not found: ID does not exist" containerID="54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.216677 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3"} err="failed to get container status \"54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3\": rpc error: code = NotFound desc = could not find container \"54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3\": container with ID starting with 54775531b26088dd8af2f470830173c549f984045b669351cf91dd499419a4c3 not found: ID does not exist"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.216708 4802 scope.go:117] "RemoveContainer" containerID="568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18"
Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.217678 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18\": container with ID starting with 568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18 not found: ID does not exist" containerID="568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.217715 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18"} err="failed to get container status \"568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18\": rpc error: code = NotFound desc = could not find container \"568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18\": container with ID starting with 568bd0471a6bab648d58daeb7111f8a68b2361dce5c13d9d7249c86234345b18 not found: ID does not exist"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.217742 4802 scope.go:117] "RemoveContainer" containerID="0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d"
Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.218215 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d\": container with ID starting with 0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d not found: ID does not exist" containerID="0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.218260 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d"} err="failed to get container status \"0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d\": rpc error: code = NotFound desc = could not find container \"0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d\": container with ID starting with 0640c9a459829e77fea175ce27bb4297899e903dd0edea91e4c1bdd2cb16197d not found: ID does not exist"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.218276 4802 scope.go:117] "RemoveContainer" containerID="92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.221822 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8frq9"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.226377 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8frq9"]
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.237568 4802 scope.go:117] "RemoveContainer" containerID="a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.268105 4802 scope.go:117] "RemoveContainer" containerID="120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45"
Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.291249 4802 scope.go:117] "RemoveContainer"
containerID="92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f" Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.292379 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f\": container with ID starting with 92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f not found: ID does not exist" containerID="92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.292478 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f"} err="failed to get container status \"92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f\": rpc error: code = NotFound desc = could not find container \"92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f\": container with ID starting with 92edb50f29445dae3dea0cb807f1e2dde3d30951e66d203dbbe7ba63cac36c0f not found: ID does not exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.292527 4802 scope.go:117] "RemoveContainer" containerID="a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f" Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.293042 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f\": container with ID starting with a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f not found: ID does not exist" containerID="a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.293095 4802 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f"} err="failed to get container status \"a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f\": rpc error: code = NotFound desc = could not find container \"a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f\": container with ID starting with a0a3072b8e422c11197058b05645c24f7359ed3ab1866cd689c27590030e083f not found: ID does not exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.293130 4802 scope.go:117] "RemoveContainer" containerID="120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45" Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.293899 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45\": container with ID starting with 120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45 not found: ID does not exist" containerID="120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.293938 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45"} err="failed to get container status \"120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45\": rpc error: code = NotFound desc = could not find container \"120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45\": container with ID starting with 120ff22186badd6c4a1c2a721de20e31e24d72382ec0ec3877d97f18d830ec45 not found: ID does not exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.293964 4802 scope.go:117] "RemoveContainer" containerID="3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.311672 4802 scope.go:117] "RemoveContainer" 
containerID="a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.332204 4802 scope.go:117] "RemoveContainer" containerID="bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.347316 4802 scope.go:117] "RemoveContainer" containerID="3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397" Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.347968 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397\": container with ID starting with 3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397 not found: ID does not exist" containerID="3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.348010 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397"} err="failed to get container status \"3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397\": rpc error: code = NotFound desc = could not find container \"3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397\": container with ID starting with 3553a11c018156795c8c3758eecfea97608cd9aed2eff8f388029bab49052397 not found: ID does not exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.348042 4802 scope.go:117] "RemoveContainer" containerID="a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d" Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.348527 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d\": container with ID starting with 
a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d not found: ID does not exist" containerID="a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.348577 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d"} err="failed to get container status \"a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d\": rpc error: code = NotFound desc = could not find container \"a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d\": container with ID starting with a16887d23b3b4a23ee89d37e54ab5faef674e68f985d681b3da982a450b01b6d not found: ID does not exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.348611 4802 scope.go:117] "RemoveContainer" containerID="bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f" Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.348991 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f\": container with ID starting with bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f not found: ID does not exist" containerID="bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.349016 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f"} err="failed to get container status \"bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f\": rpc error: code = NotFound desc = could not find container \"bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f\": container with ID starting with bf80a7c6868ccc541247c2959c2a14a66c207ad8ac8b346495913ea9e948a51f not found: ID does not 
exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.349031 4802 scope.go:117] "RemoveContainer" containerID="d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.366096 4802 scope.go:117] "RemoveContainer" containerID="dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.385522 4802 scope.go:117] "RemoveContainer" containerID="5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.401558 4802 scope.go:117] "RemoveContainer" containerID="d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85" Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.402135 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85\": container with ID starting with d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85 not found: ID does not exist" containerID="d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.402183 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85"} err="failed to get container status \"d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85\": rpc error: code = NotFound desc = could not find container \"d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85\": container with ID starting with d64071c0113e964196f57ba09a53426f83c23439cf9098bf6962dbc65ce8ea85 not found: ID does not exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.402220 4802 scope.go:117] "RemoveContainer" containerID="dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078" Oct 04 04:52:49 crc 
kubenswrapper[4802]: E1004 04:52:49.402566 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078\": container with ID starting with dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078 not found: ID does not exist" containerID="dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.402610 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078"} err="failed to get container status \"dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078\": rpc error: code = NotFound desc = could not find container \"dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078\": container with ID starting with dd224311c6732f0253c2c0cabaf2476a7628101d8cd57062c96db921b42bf078 not found: ID does not exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.402651 4802 scope.go:117] "RemoveContainer" containerID="5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55" Oct 04 04:52:49 crc kubenswrapper[4802]: E1004 04:52:49.402900 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55\": container with ID starting with 5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55 not found: ID does not exist" containerID="5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.402927 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55"} err="failed to get container status 
\"5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55\": rpc error: code = NotFound desc = could not find container \"5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55\": container with ID starting with 5adb20b92bc42def1f8e8a0c29f8e5092c17fafda2a18e2af5a0b004043c1e55 not found: ID does not exist" Oct 04 04:52:49 crc kubenswrapper[4802]: I1004 04:52:49.893038 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ch4cq"] Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.153554 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz" event={"ID":"709abe0b-3c8d-4646-bdce-0a38e7e406f8","Type":"ContainerStarted","Data":"fec0f7b72fea625108c936094807dc5783a7b3cb1d4601ff2d4aff15d105bf89"} Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.153974 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.160321 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.212594 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-x2tsz" podStartSLOduration=2.212573649 podStartE2EDuration="2.212573649s" podCreationTimestamp="2025-10-04 04:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:52:50.187327515 +0000 UTC m=+412.595328140" watchObservedRunningTime="2025-10-04 04:52:50.212573649 +0000 UTC m=+412.620574264" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.367525 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" path="/var/lib/kubelet/pods/01bacf77-5ec5-42c6-af1a-d27ffc2f26e3/volumes" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.368167 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" path="/var/lib/kubelet/pods/88d556c4-4775-463e-bfc7-c766fa10fce2/volumes" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.368755 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" path="/var/lib/kubelet/pods/9f9d61a9-b985-45da-bacf-c9fb55b52b66/volumes" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.369401 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc36cbe1-f043-49df-bb90-158d61ac67ad" path="/var/lib/kubelet/pods/cc36cbe1-f043-49df-bb90-158d61ac67ad/volumes" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.369886 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" path="/var/lib/kubelet/pods/d30c5f5e-8390-4c6c-9dff-07157aa29319/volumes" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.469978 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nzmnj"] Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470247 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerName="extract-utilities" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470272 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerName="extract-utilities" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470292 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerName="extract-content" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470301 4802 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerName="extract-content" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470311 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerName="extract-utilities" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470318 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerName="extract-utilities" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470328 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc36cbe1-f043-49df-bb90-158d61ac67ad" containerName="marketplace-operator" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470334 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc36cbe1-f043-49df-bb90-158d61ac67ad" containerName="marketplace-operator" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470344 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerName="extract-utilities" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470349 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerName="extract-utilities" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470358 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerName="extract-content" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470363 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerName="extract-content" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470372 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470379 4802 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470390 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerName="extract-utilities" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470397 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerName="extract-utilities" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470408 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerName="extract-content" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470420 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerName="extract-content" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470431 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerName="extract-content" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470438 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerName="extract-content" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470450 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470456 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470467 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470474 4802 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: E1004 04:52:50.470489 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470499 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470605 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9d61a9-b985-45da-bacf-c9fb55b52b66" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470622 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30c5f5e-8390-4c6c-9dff-07157aa29319" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470634 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc36cbe1-f043-49df-bb90-158d61ac67ad" containerName="marketplace-operator" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470660 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d556c4-4775-463e-bfc7-c766fa10fce2" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.470667 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bacf77-5ec5-42c6-af1a-d27ffc2f26e3" containerName="registry-server" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.471455 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.474625 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.490313 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzmnj"] Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.636783 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02053d7-6f09-4c04-a01d-af9a90812a86-catalog-content\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.636830 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzl2b\" (UniqueName: \"kubernetes.io/projected/a02053d7-6f09-4c04-a01d-af9a90812a86-kube-api-access-pzl2b\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.636905 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02053d7-6f09-4c04-a01d-af9a90812a86-utilities\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.669099 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tmd6j"] Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.670177 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.671944 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.685543 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmd6j"] Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.737982 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02053d7-6f09-4c04-a01d-af9a90812a86-utilities\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.738188 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02053d7-6f09-4c04-a01d-af9a90812a86-catalog-content\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.738216 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzl2b\" (UniqueName: \"kubernetes.io/projected/a02053d7-6f09-4c04-a01d-af9a90812a86-kube-api-access-pzl2b\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.738616 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02053d7-6f09-4c04-a01d-af9a90812a86-utilities\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 
04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.738859 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02053d7-6f09-4c04-a01d-af9a90812a86-catalog-content\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.763743 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzl2b\" (UniqueName: \"kubernetes.io/projected/a02053d7-6f09-4c04-a01d-af9a90812a86-kube-api-access-pzl2b\") pod \"redhat-marketplace-nzmnj\" (UID: \"a02053d7-6f09-4c04-a01d-af9a90812a86\") " pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.788151 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.839021 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-utilities\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.839423 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnz6\" (UniqueName: \"kubernetes.io/projected/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-kube-api-access-nsnz6\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.839460 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-catalog-content\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.940897 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsnz6\" (UniqueName: \"kubernetes.io/projected/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-kube-api-access-nsnz6\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.940981 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-catalog-content\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.941042 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-utilities\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.941803 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-catalog-content\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.941872 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-utilities\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.959809 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsnz6\" (UniqueName: \"kubernetes.io/projected/b222c027-fa4e-4fd2-bb99-bf44d6c44d5d-kube-api-access-nsnz6\") pod \"redhat-operators-tmd6j\" (UID: \"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d\") " pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:50 crc kubenswrapper[4802]: I1004 04:52:50.985785 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:52:51 crc kubenswrapper[4802]: I1004 04:52:51.187107 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmd6j"] Oct 04 04:52:51 crc kubenswrapper[4802]: W1004 04:52:51.200823 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb222c027_fa4e_4fd2_bb99_bf44d6c44d5d.slice/crio-aa2e1e249610d9aaa52fd43ce1525ba01dc18eaafcea6f03082b8f8fdddd938e WatchSource:0}: Error finding container aa2e1e249610d9aaa52fd43ce1525ba01dc18eaafcea6f03082b8f8fdddd938e: Status 404 returned error can't find the container with id aa2e1e249610d9aaa52fd43ce1525ba01dc18eaafcea6f03082b8f8fdddd938e Oct 04 04:52:51 crc kubenswrapper[4802]: I1004 04:52:51.219546 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzmnj"] Oct 04 04:52:51 crc kubenswrapper[4802]: W1004 04:52:51.227175 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02053d7_6f09_4c04_a01d_af9a90812a86.slice/crio-284ee11a300f6b0cb02deb494fe1fe94742ba264755d71eaf4d306376e45cb2a WatchSource:0}: 
Error finding container 284ee11a300f6b0cb02deb494fe1fe94742ba264755d71eaf4d306376e45cb2a: Status 404 returned error can't find the container with id 284ee11a300f6b0cb02deb494fe1fe94742ba264755d71eaf4d306376e45cb2a Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.172342 4802 generic.go:334] "Generic (PLEG): container finished" podID="a02053d7-6f09-4c04-a01d-af9a90812a86" containerID="3b0db14a5798f8e6cb985956f4660c147bbe96d86477b2e415fbb0284ee965cc" exitCode=0 Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.172819 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzmnj" event={"ID":"a02053d7-6f09-4c04-a01d-af9a90812a86","Type":"ContainerDied","Data":"3b0db14a5798f8e6cb985956f4660c147bbe96d86477b2e415fbb0284ee965cc"} Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.172874 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzmnj" event={"ID":"a02053d7-6f09-4c04-a01d-af9a90812a86","Type":"ContainerStarted","Data":"284ee11a300f6b0cb02deb494fe1fe94742ba264755d71eaf4d306376e45cb2a"} Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.175131 4802 generic.go:334] "Generic (PLEG): container finished" podID="b222c027-fa4e-4fd2-bb99-bf44d6c44d5d" containerID="53d57d312db927402282d594fa81d0da668105a7892972cfc9d120c7ec459824" exitCode=0 Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.176377 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmd6j" event={"ID":"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d","Type":"ContainerDied","Data":"53d57d312db927402282d594fa81d0da668105a7892972cfc9d120c7ec459824"} Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.176408 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmd6j" 
event={"ID":"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d","Type":"ContainerStarted","Data":"aa2e1e249610d9aaa52fd43ce1525ba01dc18eaafcea6f03082b8f8fdddd938e"} Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.662721 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.662802 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.869391 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vl5xn"] Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.870900 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.874277 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 04 04:52:52 crc kubenswrapper[4802]: I1004 04:52:52.884608 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vl5xn"] Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.069270 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wb9js"] Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.071824 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.077765 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfwzd\" (UniqueName: \"kubernetes.io/projected/0f090852-6771-4013-9a95-c1c0d1bd656d-kube-api-access-hfwzd\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.077792 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.077901 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-catalog-content\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.077965 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-utilities\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.078023 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f090852-6771-4013-9a95-c1c0d1bd656d-utilities\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.078078 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f090852-6771-4013-9a95-c1c0d1bd656d-catalog-content\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.078139 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nswm4\" (UniqueName: \"kubernetes.io/projected/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-kube-api-access-nswm4\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.101444 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wb9js"] Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.178993 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfwzd\" (UniqueName: \"kubernetes.io/projected/0f090852-6771-4013-9a95-c1c0d1bd656d-kube-api-access-hfwzd\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.179074 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-catalog-content\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.179105 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-utilities\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.179137 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f090852-6771-4013-9a95-c1c0d1bd656d-utilities\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.179161 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f090852-6771-4013-9a95-c1c0d1bd656d-catalog-content\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.179183 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nswm4\" (UniqueName: \"kubernetes.io/projected/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-kube-api-access-nswm4\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.179962 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f090852-6771-4013-9a95-c1c0d1bd656d-utilities\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.179999 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0f090852-6771-4013-9a95-c1c0d1bd656d-catalog-content\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.179970 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-utilities\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.181559 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-catalog-content\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.198947 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nswm4\" (UniqueName: \"kubernetes.io/projected/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-kube-api-access-nswm4\") pod \"community-operators-vl5xn\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.200561 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfwzd\" (UniqueName: \"kubernetes.io/projected/0f090852-6771-4013-9a95-c1c0d1bd656d-kube-api-access-hfwzd\") pod \"certified-operators-wb9js\" (UID: \"0f090852-6771-4013-9a95-c1c0d1bd656d\") " pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.405499 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.496315 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.640002 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wb9js"] Oct 04 04:52:53 crc kubenswrapper[4802]: W1004 04:52:53.651861 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f090852_6771_4013_9a95_c1c0d1bd656d.slice/crio-49a087330d42968787849e1b0113b426117690ad86ec9ff1c79705568463be92 WatchSource:0}: Error finding container 49a087330d42968787849e1b0113b426117690ad86ec9ff1c79705568463be92: Status 404 returned error can't find the container with id 49a087330d42968787849e1b0113b426117690ad86ec9ff1c79705568463be92 Oct 04 04:52:53 crc kubenswrapper[4802]: I1004 04:52:53.726024 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vl5xn"] Oct 04 04:52:53 crc kubenswrapper[4802]: W1004 04:52:53.733848 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d3e7f92_a81f_46c2_aadf_7766cb10e2e3.slice/crio-5fb00b5f34f277d62ac59a5e1e1a67c162a33e3e2674e4d1de10025f4b0b8525 WatchSource:0}: Error finding container 5fb00b5f34f277d62ac59a5e1e1a67c162a33e3e2674e4d1de10025f4b0b8525: Status 404 returned error can't find the container with id 5fb00b5f34f277d62ac59a5e1e1a67c162a33e3e2674e4d1de10025f4b0b8525 Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.192161 4802 generic.go:334] "Generic (PLEG): container finished" podID="a02053d7-6f09-4c04-a01d-af9a90812a86" containerID="d32c64e8914364b8c4c4a08d75662e0e15e1b85fa464e6d0e8f9dc6d189b330a" exitCode=0 Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 
04:52:54.192348 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzmnj" event={"ID":"a02053d7-6f09-4c04-a01d-af9a90812a86","Type":"ContainerDied","Data":"d32c64e8914364b8c4c4a08d75662e0e15e1b85fa464e6d0e8f9dc6d189b330a"} Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.197579 4802 generic.go:334] "Generic (PLEG): container finished" podID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerID="44cd917cdaa66ef569c085783429c8dec040bfe40d089281b92bbd3dd73e9eeb" exitCode=0 Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.197889 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl5xn" event={"ID":"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3","Type":"ContainerDied","Data":"44cd917cdaa66ef569c085783429c8dec040bfe40d089281b92bbd3dd73e9eeb"} Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.197995 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl5xn" event={"ID":"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3","Type":"ContainerStarted","Data":"5fb00b5f34f277d62ac59a5e1e1a67c162a33e3e2674e4d1de10025f4b0b8525"} Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.201238 4802 generic.go:334] "Generic (PLEG): container finished" podID="b222c027-fa4e-4fd2-bb99-bf44d6c44d5d" containerID="485e832d7d5562196b4b1b47dbd9418f3027bdd7de5e5033020405843981d9d1" exitCode=0 Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.201276 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmd6j" event={"ID":"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d","Type":"ContainerDied","Data":"485e832d7d5562196b4b1b47dbd9418f3027bdd7de5e5033020405843981d9d1"} Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.208226 4802 generic.go:334] "Generic (PLEG): container finished" podID="0f090852-6771-4013-9a95-c1c0d1bd656d" containerID="d0bd9c09278cb288962cb0c39ff3d8edfe210344ce2538a5b79eaea8953ed83d" 
exitCode=0 Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.208286 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb9js" event={"ID":"0f090852-6771-4013-9a95-c1c0d1bd656d","Type":"ContainerDied","Data":"d0bd9c09278cb288962cb0c39ff3d8edfe210344ce2538a5b79eaea8953ed83d"} Oct 04 04:52:54 crc kubenswrapper[4802]: I1004 04:52:54.208348 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb9js" event={"ID":"0f090852-6771-4013-9a95-c1c0d1bd656d","Type":"ContainerStarted","Data":"49a087330d42968787849e1b0113b426117690ad86ec9ff1c79705568463be92"} Oct 04 04:52:55 crc kubenswrapper[4802]: I1004 04:52:55.220599 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzmnj" event={"ID":"a02053d7-6f09-4c04-a01d-af9a90812a86","Type":"ContainerStarted","Data":"869279f1bcf6f9dd222f1f932de1b2961cbccdf2c835442230a8c6ca73bada22"} Oct 04 04:52:55 crc kubenswrapper[4802]: I1004 04:52:55.242388 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nzmnj" podStartSLOduration=2.684565884 podStartE2EDuration="5.242367721s" podCreationTimestamp="2025-10-04 04:52:50 +0000 UTC" firstStartedPulling="2025-10-04 04:52:52.175101425 +0000 UTC m=+414.583102040" lastFinishedPulling="2025-10-04 04:52:54.732903252 +0000 UTC m=+417.140903877" observedRunningTime="2025-10-04 04:52:55.241589975 +0000 UTC m=+417.649590610" watchObservedRunningTime="2025-10-04 04:52:55.242367721 +0000 UTC m=+417.650368346" Oct 04 04:52:56 crc kubenswrapper[4802]: I1004 04:52:56.228248 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmd6j" event={"ID":"b222c027-fa4e-4fd2-bb99-bf44d6c44d5d","Type":"ContainerStarted","Data":"1daba1eccd541e54e4658402668777bebfe92ba95bafba27c3bb87de4a010945"} Oct 04 04:52:56 crc kubenswrapper[4802]: I1004 04:52:56.232071 4802 
generic.go:334] "Generic (PLEG): container finished" podID="0f090852-6771-4013-9a95-c1c0d1bd656d" containerID="8dad724a018f631d0e9e3e22c907ff491fe1367d735f27c2ecdde85552b56741" exitCode=0 Oct 04 04:52:56 crc kubenswrapper[4802]: I1004 04:52:56.232300 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb9js" event={"ID":"0f090852-6771-4013-9a95-c1c0d1bd656d","Type":"ContainerDied","Data":"8dad724a018f631d0e9e3e22c907ff491fe1367d735f27c2ecdde85552b56741"} Oct 04 04:52:56 crc kubenswrapper[4802]: I1004 04:52:56.253957 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tmd6j" podStartSLOduration=3.4992752019999998 podStartE2EDuration="6.253932869s" podCreationTimestamp="2025-10-04 04:52:50 +0000 UTC" firstStartedPulling="2025-10-04 04:52:52.178858677 +0000 UTC m=+414.586859302" lastFinishedPulling="2025-10-04 04:52:54.933516344 +0000 UTC m=+417.341516969" observedRunningTime="2025-10-04 04:52:56.250194547 +0000 UTC m=+418.658195192" watchObservedRunningTime="2025-10-04 04:52:56.253932869 +0000 UTC m=+418.661933484" Oct 04 04:52:57 crc kubenswrapper[4802]: I1004 04:52:57.241281 4802 generic.go:334] "Generic (PLEG): container finished" podID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerID="abe3dc2c715785e5857c5d0d7a18e2f06ed589c36ed28e5e6b6700b0f8b9bdec" exitCode=0 Oct 04 04:52:57 crc kubenswrapper[4802]: I1004 04:52:57.241385 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl5xn" event={"ID":"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3","Type":"ContainerDied","Data":"abe3dc2c715785e5857c5d0d7a18e2f06ed589c36ed28e5e6b6700b0f8b9bdec"} Oct 04 04:52:58 crc kubenswrapper[4802]: I1004 04:52:58.254494 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb9js" 
event={"ID":"0f090852-6771-4013-9a95-c1c0d1bd656d","Type":"ContainerStarted","Data":"4c23e76813a8593f4b7a74f3f8f16b9b9c32b4c158fd33485baf03add68cccf2"} Oct 04 04:52:58 crc kubenswrapper[4802]: I1004 04:52:58.273579 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wb9js" podStartSLOduration=2.469579041 podStartE2EDuration="5.273562069s" podCreationTimestamp="2025-10-04 04:52:53 +0000 UTC" firstStartedPulling="2025-10-04 04:52:54.210092707 +0000 UTC m=+416.618093332" lastFinishedPulling="2025-10-04 04:52:57.014075735 +0000 UTC m=+419.422076360" observedRunningTime="2025-10-04 04:52:58.271799571 +0000 UTC m=+420.679800236" watchObservedRunningTime="2025-10-04 04:52:58.273562069 +0000 UTC m=+420.681562694" Oct 04 04:53:00 crc kubenswrapper[4802]: I1004 04:53:00.277131 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl5xn" event={"ID":"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3","Type":"ContainerStarted","Data":"fc8246cbc61cfd24d70a35ed30bf887a6e489ba2384eb729d171ea35c354db96"} Oct 04 04:53:00 crc kubenswrapper[4802]: I1004 04:53:00.295741 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vl5xn" podStartSLOduration=3.3510645009999998 podStartE2EDuration="8.295721753s" podCreationTimestamp="2025-10-04 04:52:52 +0000 UTC" firstStartedPulling="2025-10-04 04:52:54.199736608 +0000 UTC m=+416.607737233" lastFinishedPulling="2025-10-04 04:52:59.14439382 +0000 UTC m=+421.552394485" observedRunningTime="2025-10-04 04:53:00.293249802 +0000 UTC m=+422.701250437" watchObservedRunningTime="2025-10-04 04:53:00.295721753 +0000 UTC m=+422.703722378" Oct 04 04:53:00 crc kubenswrapper[4802]: I1004 04:53:00.788918 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:53:00 crc kubenswrapper[4802]: I1004 04:53:00.788988 
4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:53:00 crc kubenswrapper[4802]: I1004 04:53:00.831320 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:53:00 crc kubenswrapper[4802]: I1004 04:53:00.986890 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:53:00 crc kubenswrapper[4802]: I1004 04:53:00.986960 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:53:01 crc kubenswrapper[4802]: I1004 04:53:01.033399 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:53:01 crc kubenswrapper[4802]: I1004 04:53:01.326031 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nzmnj" Oct 04 04:53:01 crc kubenswrapper[4802]: I1004 04:53:01.332420 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tmd6j" Oct 04 04:53:03 crc kubenswrapper[4802]: I1004 04:53:03.405949 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:53:03 crc kubenswrapper[4802]: I1004 04:53:03.406321 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:53:03 crc kubenswrapper[4802]: I1004 04:53:03.453050 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:53:03 crc kubenswrapper[4802]: I1004 04:53:03.497719 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:53:03 crc kubenswrapper[4802]: I1004 04:53:03.497779 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:53:03 crc kubenswrapper[4802]: I1004 04:53:03.546654 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:53:04 crc kubenswrapper[4802]: I1004 04:53:04.347601 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wb9js" Oct 04 04:53:13 crc kubenswrapper[4802]: I1004 04:53:13.538212 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vl5xn" Oct 04 04:53:14 crc kubenswrapper[4802]: I1004 04:53:14.925615 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" podUID="ac6100d3-2668-4b1e-a78a-6f0703eca64a" containerName="oauth-openshift" containerID="cri-o://aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b" gracePeriod=15 Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.305064 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.344678 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-j29dt"] Oct 04 04:53:15 crc kubenswrapper[4802]: E1004 04:53:15.344973 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6100d3-2668-4b1e-a78a-6f0703eca64a" containerName="oauth-openshift" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.344992 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6100d3-2668-4b1e-a78a-6f0703eca64a" containerName="oauth-openshift" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.345144 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac6100d3-2668-4b1e-a78a-6f0703eca64a" containerName="oauth-openshift" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.345723 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.360264 4802 generic.go:334] "Generic (PLEG): container finished" podID="ac6100d3-2668-4b1e-a78a-6f0703eca64a" containerID="aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b" exitCode=0 Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.360325 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" event={"ID":"ac6100d3-2668-4b1e-a78a-6f0703eca64a","Type":"ContainerDied","Data":"aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b"} Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.360353 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.360386 4802 scope.go:117] "RemoveContainer" containerID="aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.360371 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ch4cq" event={"ID":"ac6100d3-2668-4b1e-a78a-6f0703eca64a","Type":"ContainerDied","Data":"c79066d6812d5383d6b2dd4109e851cbc2b6dcee27a7233617a159960c872491"} Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.368540 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-j29dt"] Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.379634 4802 scope.go:117] "RemoveContainer" containerID="aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b" Oct 04 04:53:15 crc kubenswrapper[4802]: E1004 04:53:15.384896 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b\": container with ID starting with aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b not found: ID does not exist" containerID="aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.384950 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b"} err="failed to get container status \"aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b\": rpc error: code = NotFound desc = could not find container \"aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b\": container with ID starting with aa04f9c080906b5127cf78a391dcb0df49c634fd10816a716bd3212da631351b not found: ID 
does not exist" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.487408 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-cliconfig\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.487897 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-trusted-ca-bundle\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.488429 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.488508 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.488685 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.488591 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-dir\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.489103 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-router-certs\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.490015 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-policies\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.490116 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-session\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 
04:53:15.490399 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.490530 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlgz4\" (UniqueName: \"kubernetes.io/projected/ac6100d3-2668-4b1e-a78a-6f0703eca64a-kube-api-access-jlgz4\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.490673 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-provider-selection\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.490778 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-ocp-branding-template\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.490877 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-service-ca\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc 
kubenswrapper[4802]: I1004 04:53:15.490991 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-error\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.491086 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-serving-cert\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.491190 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-login\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.491288 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-idp-0-file-data\") pod \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\" (UID: \"ac6100d3-2668-4b1e-a78a-6f0703eca64a\") " Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.491523 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdlb\" (UniqueName: \"kubernetes.io/projected/12635a42-59a0-42ec-82f7-14b2a037e4c0-kube-api-access-dxdlb\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc 
kubenswrapper[4802]: I1004 04:53:15.491603 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12635a42-59a0-42ec-82f7-14b2a037e4c0-audit-dir\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.491735 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-login\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.491842 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.491934 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-service-ca\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492040 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492112 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492195 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492297 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492426 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-audit-policies\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " 
pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492535 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492658 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492775 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.492917 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 
04:53:15.493101 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.493216 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.493320 4802 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.493425 4802 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.494322 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.498275 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.499338 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.500130 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.501740 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac6100d3-2668-4b1e-a78a-6f0703eca64a-kube-api-access-jlgz4" (OuterVolumeSpecName: "kube-api-access-jlgz4") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "kube-api-access-jlgz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.501835 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.502105 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.502483 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.503152 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.503325 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ac6100d3-2668-4b1e-a78a-6f0703eca64a" (UID: "ac6100d3-2668-4b1e-a78a-6f0703eca64a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.594790 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.594857 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.594882 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.594902 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.594945 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dxdlb\" (UniqueName: \"kubernetes.io/projected/12635a42-59a0-42ec-82f7-14b2a037e4c0-kube-api-access-dxdlb\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.594969 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12635a42-59a0-42ec-82f7-14b2a037e4c0-audit-dir\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.594998 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-login\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595022 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595040 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-service-ca\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " 
pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595064 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595080 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595100 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595119 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595147 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-audit-policies\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595187 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595198 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595208 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlgz4\" (UniqueName: \"kubernetes.io/projected/ac6100d3-2668-4b1e-a78a-6f0703eca64a-kube-api-access-jlgz4\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595199 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/12635a42-59a0-42ec-82f7-14b2a037e4c0-audit-dir\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595221 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595352 4802 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595376 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595400 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595423 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595441 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.595460 4802 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac6100d3-2668-4b1e-a78a-6f0703eca64a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.596309 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.596329 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-audit-policies\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.596364 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.596541 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.598734 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.599098 4802 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.599785 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-template-login\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.599904 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.599921 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.601608 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: 
\"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.606266 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.606857 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/12635a42-59a0-42ec-82f7-14b2a037e4c0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.615050 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxdlb\" (UniqueName: \"kubernetes.io/projected/12635a42-59a0-42ec-82f7-14b2a037e4c0-kube-api-access-dxdlb\") pod \"oauth-openshift-574b75df8-j29dt\" (UID: \"12635a42-59a0-42ec-82f7-14b2a037e4c0\") " pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.663931 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.695497 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ch4cq"] Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.699016 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ch4cq"] Oct 04 04:53:15 crc kubenswrapper[4802]: I1004 04:53:15.879163 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-j29dt"] Oct 04 04:53:15 crc kubenswrapper[4802]: W1004 04:53:15.884747 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12635a42_59a0_42ec_82f7_14b2a037e4c0.slice/crio-516a8691144e271f90d7c67979679fb4fa0b19b4c54883906fb6edb96429c1c2 WatchSource:0}: Error finding container 516a8691144e271f90d7c67979679fb4fa0b19b4c54883906fb6edb96429c1c2: Status 404 returned error can't find the container with id 516a8691144e271f90d7c67979679fb4fa0b19b4c54883906fb6edb96429c1c2 Oct 04 04:53:16 crc kubenswrapper[4802]: I1004 04:53:16.367305 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac6100d3-2668-4b1e-a78a-6f0703eca64a" path="/var/lib/kubelet/pods/ac6100d3-2668-4b1e-a78a-6f0703eca64a/volumes" Oct 04 04:53:16 crc kubenswrapper[4802]: I1004 04:53:16.369264 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" event={"ID":"12635a42-59a0-42ec-82f7-14b2a037e4c0","Type":"ContainerStarted","Data":"808a47692668688407b35b710f874bdbddb85a7163c408a0e1d22da8da2007db"} Oct 04 04:53:16 crc kubenswrapper[4802]: I1004 04:53:16.369325 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" 
event={"ID":"12635a42-59a0-42ec-82f7-14b2a037e4c0","Type":"ContainerStarted","Data":"516a8691144e271f90d7c67979679fb4fa0b19b4c54883906fb6edb96429c1c2"} Oct 04 04:53:16 crc kubenswrapper[4802]: I1004 04:53:16.369755 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:16 crc kubenswrapper[4802]: I1004 04:53:16.425094 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" podStartSLOduration=27.425065614 podStartE2EDuration="27.425065614s" podCreationTimestamp="2025-10-04 04:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:53:16.401324588 +0000 UTC m=+438.809325213" watchObservedRunningTime="2025-10-04 04:53:16.425065614 +0000 UTC m=+438.833066229" Oct 04 04:53:16 crc kubenswrapper[4802]: I1004 04:53:16.626493 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-574b75df8-j29dt" Oct 04 04:53:22 crc kubenswrapper[4802]: I1004 04:53:22.662843 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 04:53:22 crc kubenswrapper[4802]: I1004 04:53:22.663266 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:53:29 crc kubenswrapper[4802]: I1004 04:53:29.950268 4802 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-nc7x2"] Oct 04 04:53:29 crc kubenswrapper[4802]: I1004 04:53:29.951705 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:29 crc kubenswrapper[4802]: I1004 04:53:29.968960 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nc7x2"] Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.008386 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-trusted-ca\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.008448 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvkg\" (UniqueName: \"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-kube-api-access-cdvkg\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.008494 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.008553 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-bound-sa-token\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.008585 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.008608 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-registry-certificates\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.008672 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.008705 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-registry-tls\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 
04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.064492 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.110433 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-registry-tls\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.110859 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-trusted-ca\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.110948 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdvkg\" (UniqueName: \"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-kube-api-access-cdvkg\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.111035 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: 
\"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.111148 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-bound-sa-token\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.111228 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.111321 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-registry-certificates\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.112324 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.112856 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-registry-certificates\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.113318 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-trusted-ca\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.119245 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.119736 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-registry-tls\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.128573 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-bound-sa-token\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.130257 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cdvkg\" (UniqueName: \"kubernetes.io/projected/f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9-kube-api-access-cdvkg\") pod \"image-registry-66df7c8f76-nc7x2\" (UID: \"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.272899 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:30 crc kubenswrapper[4802]: I1004 04:53:30.478468 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nc7x2"] Oct 04 04:53:31 crc kubenswrapper[4802]: I1004 04:53:31.458741 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" event={"ID":"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9","Type":"ContainerStarted","Data":"40b1d52a69fcc095be65c2cb3e28555c56085ced9fb4a9d034ff673809e86275"} Oct 04 04:53:31 crc kubenswrapper[4802]: I1004 04:53:31.459096 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" event={"ID":"f02d36d9-3e80-4e81-825b-bf1dd1aa3fe9","Type":"ContainerStarted","Data":"0410f516d4f78cb01b53bcff7b66e25ce15761042c10c7e1a801a427f2fababd"} Oct 04 04:53:31 crc kubenswrapper[4802]: I1004 04:53:31.459119 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:31 crc kubenswrapper[4802]: I1004 04:53:31.480170 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" podStartSLOduration=2.480146149 podStartE2EDuration="2.480146149s" podCreationTimestamp="2025-10-04 04:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:53:31.477946096 
+0000 UTC m=+453.885946741" watchObservedRunningTime="2025-10-04 04:53:31.480146149 +0000 UTC m=+453.888146774" Oct 04 04:53:50 crc kubenswrapper[4802]: I1004 04:53:50.278281 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nc7x2" Oct 04 04:53:50 crc kubenswrapper[4802]: I1004 04:53:50.334706 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-69tth"] Oct 04 04:53:52 crc kubenswrapper[4802]: I1004 04:53:52.664074 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 04:53:52 crc kubenswrapper[4802]: I1004 04:53:52.664672 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:53:52 crc kubenswrapper[4802]: I1004 04:53:52.664735 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:53:52 crc kubenswrapper[4802]: I1004 04:53:52.666040 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d0fec1919ce376bb23a83cbe1bd76cccaab831eee4aaa8ba10e1c4573fa8eff"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 04:53:52 crc kubenswrapper[4802]: I1004 04:53:52.666112 4802 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://5d0fec1919ce376bb23a83cbe1bd76cccaab831eee4aaa8ba10e1c4573fa8eff" gracePeriod=600 Oct 04 04:53:53 crc kubenswrapper[4802]: I1004 04:53:53.586138 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="5d0fec1919ce376bb23a83cbe1bd76cccaab831eee4aaa8ba10e1c4573fa8eff" exitCode=0 Oct 04 04:53:53 crc kubenswrapper[4802]: I1004 04:53:53.586233 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"5d0fec1919ce376bb23a83cbe1bd76cccaab831eee4aaa8ba10e1c4573fa8eff"} Oct 04 04:53:53 crc kubenswrapper[4802]: I1004 04:53:53.586560 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"42b73da467217af68ceddf6b21be981add88cb16a8423f2d9a33aa563c1a7caf"} Oct 04 04:53:53 crc kubenswrapper[4802]: I1004 04:53:53.586581 4802 scope.go:117] "RemoveContainer" containerID="33143bdabda4fa200a4effb7d99acb33f7e6471fcca782d526ec053fec4bd123" Oct 04 04:54:15 crc kubenswrapper[4802]: I1004 04:54:15.384132 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" podUID="049a575e-6351-4aa3-89b0-395dd5dc7af5" containerName="registry" containerID="cri-o://0c7a541827b8d0f3573cde1fe33445ca287969f1cade7acf6ab2d0a168052b1c" gracePeriod=30 Oct 04 04:54:15 crc kubenswrapper[4802]: I1004 04:54:15.719403 4802 generic.go:334] "Generic (PLEG): container finished" podID="049a575e-6351-4aa3-89b0-395dd5dc7af5" 
containerID="0c7a541827b8d0f3573cde1fe33445ca287969f1cade7acf6ab2d0a168052b1c" exitCode=0 Oct 04 04:54:15 crc kubenswrapper[4802]: I1004 04:54:15.719482 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" event={"ID":"049a575e-6351-4aa3-89b0-395dd5dc7af5","Type":"ContainerDied","Data":"0c7a541827b8d0f3573cde1fe33445ca287969f1cade7acf6ab2d0a168052b1c"} Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.243068 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.284598 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-tls\") pod \"049a575e-6351-4aa3-89b0-395dd5dc7af5\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.284773 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-certificates\") pod \"049a575e-6351-4aa3-89b0-395dd5dc7af5\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.284836 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-trusted-ca\") pod \"049a575e-6351-4aa3-89b0-395dd5dc7af5\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.284865 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54xrd\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-kube-api-access-54xrd\") pod 
\"049a575e-6351-4aa3-89b0-395dd5dc7af5\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.284888 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/049a575e-6351-4aa3-89b0-395dd5dc7af5-ca-trust-extracted\") pod \"049a575e-6351-4aa3-89b0-395dd5dc7af5\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.285161 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"049a575e-6351-4aa3-89b0-395dd5dc7af5\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.285197 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/049a575e-6351-4aa3-89b0-395dd5dc7af5-installation-pull-secrets\") pod \"049a575e-6351-4aa3-89b0-395dd5dc7af5\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.285226 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-bound-sa-token\") pod \"049a575e-6351-4aa3-89b0-395dd5dc7af5\" (UID: \"049a575e-6351-4aa3-89b0-395dd5dc7af5\") " Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.285901 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "049a575e-6351-4aa3-89b0-395dd5dc7af5" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.286024 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "049a575e-6351-4aa3-89b0-395dd5dc7af5" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.291616 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "049a575e-6351-4aa3-89b0-395dd5dc7af5" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.291893 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-kube-api-access-54xrd" (OuterVolumeSpecName: "kube-api-access-54xrd") pod "049a575e-6351-4aa3-89b0-395dd5dc7af5" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5"). InnerVolumeSpecName "kube-api-access-54xrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.292231 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049a575e-6351-4aa3-89b0-395dd5dc7af5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "049a575e-6351-4aa3-89b0-395dd5dc7af5" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.292803 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "049a575e-6351-4aa3-89b0-395dd5dc7af5" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.303987 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049a575e-6351-4aa3-89b0-395dd5dc7af5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "049a575e-6351-4aa3-89b0-395dd5dc7af5" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.314461 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "049a575e-6351-4aa3-89b0-395dd5dc7af5" (UID: "049a575e-6351-4aa3-89b0-395dd5dc7af5"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.385826 4802 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.385857 4802 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.385868 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/049a575e-6351-4aa3-89b0-395dd5dc7af5-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.385876 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54xrd\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-kube-api-access-54xrd\") on node \"crc\" DevicePath \"\"" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.385885 4802 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/049a575e-6351-4aa3-89b0-395dd5dc7af5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.385893 4802 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/049a575e-6351-4aa3-89b0-395dd5dc7af5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.385902 4802 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/049a575e-6351-4aa3-89b0-395dd5dc7af5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 04 04:54:16 crc 
kubenswrapper[4802]: I1004 04:54:16.727068 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" event={"ID":"049a575e-6351-4aa3-89b0-395dd5dc7af5","Type":"ContainerDied","Data":"3898148a7d0a0d30cadb41dc3607b30e4b5d43993ed70815f41950caf93badb0"} Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.727137 4802 scope.go:117] "RemoveContainer" containerID="0c7a541827b8d0f3573cde1fe33445ca287969f1cade7acf6ab2d0a168052b1c" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.727273 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.756843 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-69tth"] Oct 04 04:54:16 crc kubenswrapper[4802]: I1004 04:54:16.759065 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-69tth"] Oct 04 04:54:18 crc kubenswrapper[4802]: I1004 04:54:18.367015 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049a575e-6351-4aa3-89b0-395dd5dc7af5" path="/var/lib/kubelet/pods/049a575e-6351-4aa3-89b0-395dd5dc7af5/volumes" Oct 04 04:54:21 crc kubenswrapper[4802]: I1004 04:54:21.146936 4802 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-69tth container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.33:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 04 04:54:21 crc kubenswrapper[4802]: I1004 04:54:21.147296 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-69tth" podUID="049a575e-6351-4aa3-89b0-395dd5dc7af5" containerName="registry" probeResult="failure" output="Get 
\"https://10.217.0.33:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 04 04:56:22 crc kubenswrapper[4802]: I1004 04:56:22.662872 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 04:56:22 crc kubenswrapper[4802]: I1004 04:56:22.663631 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:56:52 crc kubenswrapper[4802]: I1004 04:56:52.662876 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 04:56:52 crc kubenswrapper[4802]: I1004 04:56:52.663502 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:57:22 crc kubenswrapper[4802]: I1004 04:57:22.662847 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 
04:57:22 crc kubenswrapper[4802]: I1004 04:57:22.663475 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:57:22 crc kubenswrapper[4802]: I1004 04:57:22.663554 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 04:57:22 crc kubenswrapper[4802]: I1004 04:57:22.664534 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42b73da467217af68ceddf6b21be981add88cb16a8423f2d9a33aa563c1a7caf"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 04:57:22 crc kubenswrapper[4802]: I1004 04:57:22.664673 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://42b73da467217af68ceddf6b21be981add88cb16a8423f2d9a33aa563c1a7caf" gracePeriod=600 Oct 04 04:57:23 crc kubenswrapper[4802]: I1004 04:57:23.909410 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="42b73da467217af68ceddf6b21be981add88cb16a8423f2d9a33aa563c1a7caf" exitCode=0 Oct 04 04:57:23 crc kubenswrapper[4802]: I1004 04:57:23.909503 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" 
event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"42b73da467217af68ceddf6b21be981add88cb16a8423f2d9a33aa563c1a7caf"} Oct 04 04:57:23 crc kubenswrapper[4802]: I1004 04:57:23.910032 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"4bf3a67a3aced7a776f95ac83df345cb7b69786ce2cff835d8681589db22e4b4"} Oct 04 04:57:23 crc kubenswrapper[4802]: I1004 04:57:23.910059 4802 scope.go:117] "RemoveContainer" containerID="5d0fec1919ce376bb23a83cbe1bd76cccaab831eee4aaa8ba10e1c4573fa8eff" Oct 04 04:58:35 crc kubenswrapper[4802]: I1004 04:58:35.371416 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g7767"] Oct 04 04:58:35 crc kubenswrapper[4802]: I1004 04:58:35.372220 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" podUID="77fac432-21bd-4251-bb24-320cc71f536c" containerName="controller-manager" containerID="cri-o://1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8" gracePeriod=30 Oct 04 04:58:35 crc kubenswrapper[4802]: I1004 04:58:35.461013 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"] Oct 04 04:58:35 crc kubenswrapper[4802]: I1004 04:58:35.461260 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" podUID="bf4310b6-043e-47e5-8519-9a513fb8da48" containerName="route-controller-manager" containerID="cri-o://f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515" gracePeriod=30 Oct 04 04:58:35 crc kubenswrapper[4802]: I1004 04:58:35.807812 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:58:35 crc kubenswrapper[4802]: I1004 04:58:35.858660 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002444 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fac432-21bd-4251-bb24-320cc71f536c-serving-cert\") pod \"77fac432-21bd-4251-bb24-320cc71f536c\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002498 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf4310b6-043e-47e5-8519-9a513fb8da48-serving-cert\") pod \"bf4310b6-043e-47e5-8519-9a513fb8da48\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002527 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-client-ca\") pod \"77fac432-21bd-4251-bb24-320cc71f536c\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002672 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-984mg\" (UniqueName: \"kubernetes.io/projected/77fac432-21bd-4251-bb24-320cc71f536c-kube-api-access-984mg\") pod \"77fac432-21bd-4251-bb24-320cc71f536c\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002711 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-config\") pod 
\"77fac432-21bd-4251-bb24-320cc71f536c\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002735 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-config\") pod \"bf4310b6-043e-47e5-8519-9a513fb8da48\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002776 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25q7d\" (UniqueName: \"kubernetes.io/projected/bf4310b6-043e-47e5-8519-9a513fb8da48-kube-api-access-25q7d\") pod \"bf4310b6-043e-47e5-8519-9a513fb8da48\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002798 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-client-ca\") pod \"bf4310b6-043e-47e5-8519-9a513fb8da48\" (UID: \"bf4310b6-043e-47e5-8519-9a513fb8da48\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.002834 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-proxy-ca-bundles\") pod \"77fac432-21bd-4251-bb24-320cc71f536c\" (UID: \"77fac432-21bd-4251-bb24-320cc71f536c\") " Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.003781 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf4310b6-043e-47e5-8519-9a513fb8da48" (UID: "bf4310b6-043e-47e5-8519-9a513fb8da48"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.003864 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-config" (OuterVolumeSpecName: "config") pod "bf4310b6-043e-47e5-8519-9a513fb8da48" (UID: "bf4310b6-043e-47e5-8519-9a513fb8da48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.004338 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-client-ca" (OuterVolumeSpecName: "client-ca") pod "77fac432-21bd-4251-bb24-320cc71f536c" (UID: "77fac432-21bd-4251-bb24-320cc71f536c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.004368 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-config" (OuterVolumeSpecName: "config") pod "77fac432-21bd-4251-bb24-320cc71f536c" (UID: "77fac432-21bd-4251-bb24-320cc71f536c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.004742 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "77fac432-21bd-4251-bb24-320cc71f536c" (UID: "77fac432-21bd-4251-bb24-320cc71f536c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.009677 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4310b6-043e-47e5-8519-9a513fb8da48-kube-api-access-25q7d" (OuterVolumeSpecName: "kube-api-access-25q7d") pod "bf4310b6-043e-47e5-8519-9a513fb8da48" (UID: "bf4310b6-043e-47e5-8519-9a513fb8da48"). InnerVolumeSpecName "kube-api-access-25q7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.009900 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4310b6-043e-47e5-8519-9a513fb8da48-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf4310b6-043e-47e5-8519-9a513fb8da48" (UID: "bf4310b6-043e-47e5-8519-9a513fb8da48"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.010166 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fac432-21bd-4251-bb24-320cc71f536c-kube-api-access-984mg" (OuterVolumeSpecName: "kube-api-access-984mg") pod "77fac432-21bd-4251-bb24-320cc71f536c" (UID: "77fac432-21bd-4251-bb24-320cc71f536c"). InnerVolumeSpecName "kube-api-access-984mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.010926 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fac432-21bd-4251-bb24-320cc71f536c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "77fac432-21bd-4251-bb24-320cc71f536c" (UID: "77fac432-21bd-4251-bb24-320cc71f536c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104076 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fac432-21bd-4251-bb24-320cc71f536c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104123 4802 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf4310b6-043e-47e5-8519-9a513fb8da48-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104136 4802 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-client-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104149 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-984mg\" (UniqueName: \"kubernetes.io/projected/77fac432-21bd-4251-bb24-320cc71f536c-kube-api-access-984mg\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104163 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104176 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104188 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25q7d\" (UniqueName: \"kubernetes.io/projected/bf4310b6-043e-47e5-8519-9a513fb8da48-kube-api-access-25q7d\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104199 4802 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf4310b6-043e-47e5-8519-9a513fb8da48-client-ca\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.104209 4802 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77fac432-21bd-4251-bb24-320cc71f536c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.376899 4802 generic.go:334] "Generic (PLEG): container finished" podID="77fac432-21bd-4251-bb24-320cc71f536c" containerID="1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8" exitCode=0 Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.377065 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.378048 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" event={"ID":"77fac432-21bd-4251-bb24-320cc71f536c","Type":"ContainerDied","Data":"1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8"} Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.379192 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g7767" event={"ID":"77fac432-21bd-4251-bb24-320cc71f536c","Type":"ContainerDied","Data":"f4bd211a9428f906b337dd8269bf7c325e769ee1d2d51df7ce33ecb173b56f7f"} Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.379248 4802 scope.go:117] "RemoveContainer" containerID="1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.383912 4802 generic.go:334] "Generic (PLEG): container finished" podID="bf4310b6-043e-47e5-8519-9a513fb8da48" 
containerID="f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515" exitCode=0 Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.384012 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" event={"ID":"bf4310b6-043e-47e5-8519-9a513fb8da48","Type":"ContainerDied","Data":"f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515"} Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.384051 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" event={"ID":"bf4310b6-043e-47e5-8519-9a513fb8da48","Type":"ContainerDied","Data":"eb0760814014619d04efbff99031f9fec20c5878499cb6b84921937c76542282"} Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.384098 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.405199 4802 scope.go:117] "RemoveContainer" containerID="1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8" Oct 04 04:58:36 crc kubenswrapper[4802]: E1004 04:58:36.405736 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8\": container with ID starting with 1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8 not found: ID does not exist" containerID="1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.405788 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8"} err="failed to get container status \"1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8\": 
rpc error: code = NotFound desc = could not find container \"1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8\": container with ID starting with 1498717e194fa4bc4c97b8e71ee18089f7de50c9d749534f001b8903208acdb8 not found: ID does not exist" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.405821 4802 scope.go:117] "RemoveContainer" containerID="f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.444764 4802 scope.go:117] "RemoveContainer" containerID="f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515" Oct 04 04:58:36 crc kubenswrapper[4802]: E1004 04:58:36.445348 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515\": container with ID starting with f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515 not found: ID does not exist" containerID="f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.445390 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515"} err="failed to get container status \"f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515\": rpc error: code = NotFound desc = could not find container \"f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515\": container with ID starting with f5100df8680cec1897cb6111d6fc2270f7245f73d6b7fdf7f51e567d8b954515 not found: ID does not exist" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.455615 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"] Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.460165 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bw7xx"] Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.463961 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g7767"] Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.469234 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g7767"] Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.854139 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54d4894548-wr465"] Oct 04 04:58:36 crc kubenswrapper[4802]: E1004 04:58:36.854442 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4310b6-043e-47e5-8519-9a513fb8da48" containerName="route-controller-manager" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.854454 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4310b6-043e-47e5-8519-9a513fb8da48" containerName="route-controller-manager" Oct 04 04:58:36 crc kubenswrapper[4802]: E1004 04:58:36.854472 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049a575e-6351-4aa3-89b0-395dd5dc7af5" containerName="registry" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.854478 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="049a575e-6351-4aa3-89b0-395dd5dc7af5" containerName="registry" Oct 04 04:58:36 crc kubenswrapper[4802]: E1004 04:58:36.854486 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77fac432-21bd-4251-bb24-320cc71f536c" containerName="controller-manager" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.854493 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="77fac432-21bd-4251-bb24-320cc71f536c" containerName="controller-manager" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.854576 4802 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bf4310b6-043e-47e5-8519-9a513fb8da48" containerName="route-controller-manager" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.854587 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="049a575e-6351-4aa3-89b0-395dd5dc7af5" containerName="registry" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.854598 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="77fac432-21bd-4251-bb24-320cc71f536c" containerName="controller-manager" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.855067 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.856935 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr"] Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.857937 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.858536 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.858785 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.859030 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.859248 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.859736 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.865343 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr"] Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.865868 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.866095 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.866257 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.869186 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-54d4894548-wr465"] Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.870094 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.870443 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.870750 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.871059 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 04 04:58:36 crc kubenswrapper[4802]: I1004 04:58:36.871676 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.024333 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-proxy-ca-bundles\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.024397 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfqm4\" (UniqueName: \"kubernetes.io/projected/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-kube-api-access-lfqm4\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.024431 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-client-ca\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.024449 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-serving-cert\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.024568 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1068331-11f2-4b71-92e5-63820bc9e767-serving-cert\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.024737 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1068331-11f2-4b71-92e5-63820bc9e767-client-ca\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.024786 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnbg\" (UniqueName: \"kubernetes.io/projected/f1068331-11f2-4b71-92e5-63820bc9e767-kube-api-access-bmnbg\") pod 
\"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.024832 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1068331-11f2-4b71-92e5-63820bc9e767-config\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.025157 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-config\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126734 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfqm4\" (UniqueName: \"kubernetes.io/projected/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-kube-api-access-lfqm4\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126796 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-client-ca\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126817 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-serving-cert\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126844 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1068331-11f2-4b71-92e5-63820bc9e767-serving-cert\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126872 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1068331-11f2-4b71-92e5-63820bc9e767-client-ca\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126888 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmnbg\" (UniqueName: \"kubernetes.io/projected/f1068331-11f2-4b71-92e5-63820bc9e767-kube-api-access-bmnbg\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126910 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1068331-11f2-4b71-92e5-63820bc9e767-config\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " 
pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126950 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-config\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.126981 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-proxy-ca-bundles\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.128519 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1068331-11f2-4b71-92e5-63820bc9e767-client-ca\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.128546 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-proxy-ca-bundles\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.128580 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-client-ca\") pod 
\"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.129854 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1068331-11f2-4b71-92e5-63820bc9e767-config\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.130485 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-config\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.132879 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-serving-cert\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.134727 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1068331-11f2-4b71-92e5-63820bc9e767-serving-cert\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.144039 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfqm4\" (UniqueName: 
\"kubernetes.io/projected/da47dfd4-d30f-4b2b-b8df-fd7bd985ce89-kube-api-access-lfqm4\") pod \"controller-manager-54d4894548-wr465\" (UID: \"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89\") " pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.146610 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmnbg\" (UniqueName: \"kubernetes.io/projected/f1068331-11f2-4b71-92e5-63820bc9e767-kube-api-access-bmnbg\") pod \"route-controller-manager-688d4f5ff8-4kgcr\" (UID: \"f1068331-11f2-4b71-92e5-63820bc9e767\") " pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.183372 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.195410 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.392933 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54d4894548-wr465"] Oct 04 04:58:37 crc kubenswrapper[4802]: I1004 04:58:37.419871 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr"] Oct 04 04:58:37 crc kubenswrapper[4802]: W1004 04:58:37.427162 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1068331_11f2_4b71_92e5_63820bc9e767.slice/crio-a2aeb8c93b6ad32b8a9f6d30d2d187466866dacac700650c3da49f24ed2fc7e0 WatchSource:0}: Error finding container a2aeb8c93b6ad32b8a9f6d30d2d187466866dacac700650c3da49f24ed2fc7e0: Status 404 returned error can't find the container with id a2aeb8c93b6ad32b8a9f6d30d2d187466866dacac700650c3da49f24ed2fc7e0 Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.373070 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77fac432-21bd-4251-bb24-320cc71f536c" path="/var/lib/kubelet/pods/77fac432-21bd-4251-bb24-320cc71f536c/volumes" Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.375270 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4310b6-043e-47e5-8519-9a513fb8da48" path="/var/lib/kubelet/pods/bf4310b6-043e-47e5-8519-9a513fb8da48/volumes" Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.402728 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" event={"ID":"f1068331-11f2-4b71-92e5-63820bc9e767","Type":"ContainerStarted","Data":"a7c46aa8b590351b39a3316d761f6e11e44d97f04235010e041f4acc06ac51e6"} Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.402779 4802 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" event={"ID":"f1068331-11f2-4b71-92e5-63820bc9e767","Type":"ContainerStarted","Data":"a2aeb8c93b6ad32b8a9f6d30d2d187466866dacac700650c3da49f24ed2fc7e0"} Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.402944 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.416373 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.424429 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d4894548-wr465" event={"ID":"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89","Type":"ContainerStarted","Data":"62d78f1c4814e9e96b8f6b4cbcd6d5a9f8e01d685f3efd8b13bad37852518e96"} Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.424488 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d4894548-wr465" event={"ID":"da47dfd4-d30f-4b2b-b8df-fd7bd985ce89","Type":"ContainerStarted","Data":"a081a5230e04e171627bfb06ee39649f6eb195eea9e79d5c5664741293fb8ae0"} Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.426239 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.433218 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-688d4f5ff8-4kgcr" podStartSLOduration=3.433193692 podStartE2EDuration="3.433193692s" podCreationTimestamp="2025-10-04 04:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:58:38.431251597 +0000 UTC m=+760.839252232" watchObservedRunningTime="2025-10-04 04:58:38.433193692 +0000 UTC m=+760.841194317" Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.437545 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54d4894548-wr465" Oct 04 04:58:38 crc kubenswrapper[4802]: I1004 04:58:38.457018 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54d4894548-wr465" podStartSLOduration=3.457001023 podStartE2EDuration="3.457001023s" podCreationTimestamp="2025-10-04 04:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:58:38.453280618 +0000 UTC m=+760.861281243" watchObservedRunningTime="2025-10-04 04:58:38.457001023 +0000 UTC m=+760.865001648" Oct 04 04:58:42 crc kubenswrapper[4802]: I1004 04:58:42.173324 4802 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.207953 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qvsrv"] Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.209227 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvsrv" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.215070 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-btfxr"] Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.215829 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-btfxr" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.216406 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.216497 4802 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-gcc8h" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.216553 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.218261 4802 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-r5lp6" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.223393 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qvsrv"] Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.255617 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-btfxr"] Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.276088 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4zrhg"] Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.277323 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.279508 4802 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l68ds" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.282687 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4zrhg"] Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.362507 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhfn\" (UniqueName: \"kubernetes.io/projected/2cc33a76-77de-4149-a501-28de25d1b772-kube-api-access-9jhfn\") pod \"cert-manager-cainjector-7f985d654d-qvsrv\" (UID: \"2cc33a76-77de-4149-a501-28de25d1b772\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qvsrv" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.362597 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dc68\" (UniqueName: \"kubernetes.io/projected/61f7ab8b-a895-4573-924e-2fbc1fd17e84-kube-api-access-2dc68\") pod \"cert-manager-5b446d88c5-btfxr\" (UID: \"61f7ab8b-a895-4573-924e-2fbc1fd17e84\") " pod="cert-manager/cert-manager-5b446d88c5-btfxr" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.464075 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dc68\" (UniqueName: \"kubernetes.io/projected/61f7ab8b-a895-4573-924e-2fbc1fd17e84-kube-api-access-2dc68\") pod \"cert-manager-5b446d88c5-btfxr\" (UID: \"61f7ab8b-a895-4573-924e-2fbc1fd17e84\") " pod="cert-manager/cert-manager-5b446d88c5-btfxr" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.464160 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jhfn\" (UniqueName: \"kubernetes.io/projected/2cc33a76-77de-4149-a501-28de25d1b772-kube-api-access-9jhfn\") 
pod \"cert-manager-cainjector-7f985d654d-qvsrv\" (UID: \"2cc33a76-77de-4149-a501-28de25d1b772\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qvsrv" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.464195 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h2w9\" (UniqueName: \"kubernetes.io/projected/618bdd8f-d326-4da6-a8ee-b8aee0f1e09f-kube-api-access-6h2w9\") pod \"cert-manager-webhook-5655c58dd6-4zrhg\" (UID: \"618bdd8f-d326-4da6-a8ee-b8aee0f1e09f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.485250 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dc68\" (UniqueName: \"kubernetes.io/projected/61f7ab8b-a895-4573-924e-2fbc1fd17e84-kube-api-access-2dc68\") pod \"cert-manager-5b446d88c5-btfxr\" (UID: \"61f7ab8b-a895-4573-924e-2fbc1fd17e84\") " pod="cert-manager/cert-manager-5b446d88c5-btfxr" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.486371 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhfn\" (UniqueName: \"kubernetes.io/projected/2cc33a76-77de-4149-a501-28de25d1b772-kube-api-access-9jhfn\") pod \"cert-manager-cainjector-7f985d654d-qvsrv\" (UID: \"2cc33a76-77de-4149-a501-28de25d1b772\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qvsrv" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.565058 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h2w9\" (UniqueName: \"kubernetes.io/projected/618bdd8f-d326-4da6-a8ee-b8aee0f1e09f-kube-api-access-6h2w9\") pod \"cert-manager-webhook-5655c58dd6-4zrhg\" (UID: \"618bdd8f-d326-4da6-a8ee-b8aee0f1e09f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.575497 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvsrv" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.576385 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-btfxr" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.582901 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h2w9\" (UniqueName: \"kubernetes.io/projected/618bdd8f-d326-4da6-a8ee-b8aee0f1e09f-kube-api-access-6h2w9\") pod \"cert-manager-webhook-5655c58dd6-4zrhg\" (UID: \"618bdd8f-d326-4da6-a8ee-b8aee0f1e09f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" Oct 04 04:58:53 crc kubenswrapper[4802]: I1004 04:58:53.595533 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" Oct 04 04:58:54 crc kubenswrapper[4802]: I1004 04:58:54.024179 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-btfxr"] Oct 04 04:58:54 crc kubenswrapper[4802]: I1004 04:58:54.032460 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 04:58:54 crc kubenswrapper[4802]: I1004 04:58:54.144972 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qvsrv"] Oct 04 04:58:54 crc kubenswrapper[4802]: W1004 04:58:54.151272 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc33a76_77de_4149_a501_28de25d1b772.slice/crio-beb3d93bac73b4a076a3ba6429090a7e674fa4769e2347ec920cc4cc8ac024ec WatchSource:0}: Error finding container beb3d93bac73b4a076a3ba6429090a7e674fa4769e2347ec920cc4cc8ac024ec: Status 404 returned error can't find the container with id beb3d93bac73b4a076a3ba6429090a7e674fa4769e2347ec920cc4cc8ac024ec Oct 04 04:58:54 crc kubenswrapper[4802]: I1004 04:58:54.162932 
4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4zrhg"] Oct 04 04:58:54 crc kubenswrapper[4802]: W1004 04:58:54.168614 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod618bdd8f_d326_4da6_a8ee_b8aee0f1e09f.slice/crio-015ed470fad28943e9521075ea1cee57bba4cf78a05c836d7ed1f2a9d940bede WatchSource:0}: Error finding container 015ed470fad28943e9521075ea1cee57bba4cf78a05c836d7ed1f2a9d940bede: Status 404 returned error can't find the container with id 015ed470fad28943e9521075ea1cee57bba4cf78a05c836d7ed1f2a9d940bede Oct 04 04:58:54 crc kubenswrapper[4802]: I1004 04:58:54.525447 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvsrv" event={"ID":"2cc33a76-77de-4149-a501-28de25d1b772","Type":"ContainerStarted","Data":"beb3d93bac73b4a076a3ba6429090a7e674fa4769e2347ec920cc4cc8ac024ec"} Oct 04 04:58:54 crc kubenswrapper[4802]: I1004 04:58:54.531043 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" event={"ID":"618bdd8f-d326-4da6-a8ee-b8aee0f1e09f","Type":"ContainerStarted","Data":"015ed470fad28943e9521075ea1cee57bba4cf78a05c836d7ed1f2a9d940bede"} Oct 04 04:58:54 crc kubenswrapper[4802]: I1004 04:58:54.533261 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-btfxr" event={"ID":"61f7ab8b-a895-4573-924e-2fbc1fd17e84","Type":"ContainerStarted","Data":"72a09e31815725ecf4985cbaaa19725c83cfddafb10fe1b8ab38f2f07737fe76"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.213980 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bw8lw"] Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.214952 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" 
podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovn-controller" containerID="cri-o://bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b" gracePeriod=30 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.214982 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="nbdb" containerID="cri-o://a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff" gracePeriod=30 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.215088 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="sbdb" containerID="cri-o://89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad" gracePeriod=30 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.215296 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovn-acl-logging" containerID="cri-o://52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774" gracePeriod=30 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.215344 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b" gracePeriod=30 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.215340 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="northd" containerID="cri-o://0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78" gracePeriod=30 
Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.217383 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kube-rbac-proxy-node" containerID="cri-o://5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe" gracePeriod=30 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.260092 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" containerID="cri-o://4f0d732429f19770b0817692452930258fb7fe8a6f169bffd3e2405933193dab" gracePeriod=30 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.586235 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovnkube-controller/3.log" Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.590432 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovn-acl-logging/0.log" Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591023 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovn-controller/0.log" Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591438 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="4f0d732429f19770b0817692452930258fb7fe8a6f169bffd3e2405933193dab" exitCode=0 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591473 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad" exitCode=0 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 
04:59:03.591488 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff" exitCode=0 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591498 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78" exitCode=0 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591508 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b" exitCode=0 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591518 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe" exitCode=0 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591525 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerID="52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774" exitCode=143 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591521 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"4f0d732429f19770b0817692452930258fb7fe8a6f169bffd3e2405933193dab"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591575 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591589 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591600 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591609 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591619 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591628 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591637 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591535 4802 generic.go:334] "Generic (PLEG): container finished" podID="11ac83cd-2981-4717-8cb4-2ca3e302461a" 
containerID="bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b" exitCode=143 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.591664 4802 scope.go:117] "RemoveContainer" containerID="29cefc9ca4a732572a12c60ebd7db460bb291126bda5bd4a4d37a87c60f91d5c" Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.593520 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/2.log" Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.593866 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/1.log" Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.593902 4802 generic.go:334] "Generic (PLEG): container finished" podID="c1c56664-b32b-475a-89eb-55910da58338" containerID="6e875be304c527cc69671b43e27d94fc9ad734c107c7e13038c19544996014da" exitCode=2 Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.593925 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jpj5" event={"ID":"c1c56664-b32b-475a-89eb-55910da58338","Type":"ContainerDied","Data":"6e875be304c527cc69671b43e27d94fc9ad734c107c7e13038c19544996014da"} Oct 04 04:59:03 crc kubenswrapper[4802]: I1004 04:59:03.594601 4802 scope.go:117] "RemoveContainer" containerID="6e875be304c527cc69671b43e27d94fc9ad734c107c7e13038c19544996014da" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.477865 4802 scope.go:117] "RemoveContainer" containerID="703d0cda61f01b5ea4b0c1fa1dd68ddcd65f8bc00b22022bbb3a8c77a11425a2" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.552851 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovn-acl-logging/0.log" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.554158 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovn-controller/0.log" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.554673 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.614253 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovn-acl-logging/0.log" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.615003 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bw8lw_11ac83cd-2981-4717-8cb4-2ca3e302461a/ovn-controller/0.log" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.615464 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" event={"ID":"11ac83cd-2981-4717-8cb4-2ca3e302461a","Type":"ContainerDied","Data":"317a0770f7ef4c98f8dabe125db8cd4fe6017e4850514539da1aa9565562d5fc"} Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.615584 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bw8lw" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.623728 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2rft8"] Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.623990 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kubecfg-setup" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624005 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kubecfg-setup" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624015 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="northd" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624024 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="northd" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624035 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624042 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624050 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624057 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624070 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" 
containerName="ovn-acl-logging" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624078 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovn-acl-logging" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624085 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kube-rbac-proxy-ovn-metrics" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624093 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kube-rbac-proxy-ovn-metrics" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624102 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kube-rbac-proxy-node" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624110 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kube-rbac-proxy-node" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624126 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovn-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624134 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovn-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624164 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="sbdb" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624172 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="sbdb" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624183 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" 
containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624190 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624203 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="nbdb" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624210 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="nbdb" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624319 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kube-rbac-proxy-ovn-metrics" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624332 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624341 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovn-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624353 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624362 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovn-acl-logging" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624372 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="nbdb" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624401 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" 
containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624411 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="northd" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624420 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="sbdb" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624431 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="kube-rbac-proxy-node" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624442 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624551 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624561 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624700 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: E1004 04:59:04.624811 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.624821 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" containerName="ovnkube-controller" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.626360 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.752922 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-config\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.752976 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-node-log\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753003 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-kubelet\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753018 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-systemd\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753036 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-ovn\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753066 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-var-lib-openvswitch\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753105 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-openvswitch\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753124 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-netd\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753144 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-netns\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753160 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-env-overrides\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753184 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-systemd-units\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 
04:59:04.753209 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-script-lib\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753228 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-slash\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753245 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6czkw\" (UniqueName: \"kubernetes.io/projected/11ac83cd-2981-4717-8cb4-2ca3e302461a-kube-api-access-6czkw\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753267 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753297 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-bin\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753316 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovn-node-metrics-cert\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753333 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-ovn-kubernetes\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753354 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-etc-openvswitch\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753372 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-log-socket\") pod \"11ac83cd-2981-4717-8cb4-2ca3e302461a\" (UID: \"11ac83cd-2981-4717-8cb4-2ca3e302461a\") " Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753521 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-etc-openvswitch\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753545 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-cni-netd\") pod \"ovnkube-node-2rft8\" (UID: 
\"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753547 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753582 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-systemd-units\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753687 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-node-log" (OuterVolumeSpecName: "node-log") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753710 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753707 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-openvswitch\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753763 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovn-node-metrics-cert\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753774 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-slash" (OuterVolumeSpecName: "host-slash") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753881 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-kubelet\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753880 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753906 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-run-netns\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753903 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753934 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-var-lib-openvswitch\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753959 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753933 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.753978 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754004 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754026 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754034 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-run-ovn-kubernetes\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754076 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-systemd\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754119 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-ovn\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754139 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-log-socket\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754162 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-env-overrides\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754178 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovnkube-script-lib\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754159 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-log-socket" (OuterVolumeSpecName: "log-socket") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754198 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754248 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754377 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-slash\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754424 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754463 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovnkube-config\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754493 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8g9x\" (UniqueName: \"kubernetes.io/projected/b98f891b-b088-4d3b-81f6-1fd09fa22b23-kube-api-access-x8g9x\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754521 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-node-log\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754549 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-cni-bin\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754603 4802 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 
04:59:04.754618 4802 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-log-socket\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754630 4802 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754655 4802 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-node-log\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754666 4802 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754676 4802 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754687 4802 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754677 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754695 4802 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754742 4802 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754758 4802 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754772 4802 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-slash\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754789 4802 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754806 4802 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754814 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod 
"11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.754959 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.759857 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.759971 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ac83cd-2981-4717-8cb4-2ca3e302461a-kube-api-access-6czkw" (OuterVolumeSpecName: "kube-api-access-6czkw") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "kube-api-access-6czkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.780076 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "11ac83cd-2981-4717-8cb4-2ca3e302461a" (UID: "11ac83cd-2981-4717-8cb4-2ca3e302461a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.855751 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovnkube-config\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.855805 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8g9x\" (UniqueName: \"kubernetes.io/projected/b98f891b-b088-4d3b-81f6-1fd09fa22b23-kube-api-access-x8g9x\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.855830 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-node-log\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.855863 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-cni-bin\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.855905 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-etc-openvswitch\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc 
kubenswrapper[4802]: I1004 04:59:04.855927 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-cni-netd\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.855965 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovn-node-metrics-cert\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.855985 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-systemd-units\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856004 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-openvswitch\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856032 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-run-netns\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856033 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-node-log\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856098 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-systemd-units\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856033 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-etc-openvswitch\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856087 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-kubelet\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856109 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-run-netns\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856110 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-openvswitch\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856106 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-cni-netd\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856055 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-kubelet\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856697 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovnkube-config\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.856805 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-cni-bin\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858281 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-var-lib-openvswitch\") pod \"ovnkube-node-2rft8\" 
(UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858249 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-var-lib-openvswitch\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858340 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-run-ovn-kubernetes\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858374 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-systemd\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858428 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-ovn\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858457 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-log-socket\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858461 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-systemd\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858428 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-run-ovn-kubernetes\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858493 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858499 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-run-ovn\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858518 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-env-overrides\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 
04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858541 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858544 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovnkube-script-lib\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858514 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-log-socket\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858752 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-slash\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858753 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b98f891b-b088-4d3b-81f6-1fd09fa22b23-host-slash\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858891 4802 
reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858903 4802 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858913 4802 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858923 4802 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858933 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6czkw\" (UniqueName: \"kubernetes.io/projected/11ac83cd-2981-4717-8cb4-2ca3e302461a-kube-api-access-6czkw\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858946 4802 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11ac83cd-2981-4717-8cb4-2ca3e302461a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.858957 4802 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11ac83cd-2981-4717-8cb4-2ca3e302461a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.859079 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-env-overrides\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.859454 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovnkube-script-lib\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.860452 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b98f891b-b088-4d3b-81f6-1fd09fa22b23-ovn-node-metrics-cert\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.873907 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8g9x\" (UniqueName: \"kubernetes.io/projected/b98f891b-b088-4d3b-81f6-1fd09fa22b23-kube-api-access-x8g9x\") pod \"ovnkube-node-2rft8\" (UID: \"b98f891b-b088-4d3b-81f6-1fd09fa22b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.942806 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.967505 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bw8lw"] Oct 04 04:59:04 crc kubenswrapper[4802]: I1004 04:59:04.972658 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bw8lw"] Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.286174 4802 scope.go:117] "RemoveContainer" containerID="4f0d732429f19770b0817692452930258fb7fe8a6f169bffd3e2405933193dab" Oct 04 04:59:05 crc kubenswrapper[4802]: W1004 04:59:05.308925 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb98f891b_b088_4d3b_81f6_1fd09fa22b23.slice/crio-be8fa04b9f9e7343ce6f60e30c3f929110bb3ecd9c35d8c20a9b7b70ffafbf3d WatchSource:0}: Error finding container be8fa04b9f9e7343ce6f60e30c3f929110bb3ecd9c35d8c20a9b7b70ffafbf3d: Status 404 returned error can't find the container with id be8fa04b9f9e7343ce6f60e30c3f929110bb3ecd9c35d8c20a9b7b70ffafbf3d Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.347598 4802 scope.go:117] "RemoveContainer" containerID="89c54eb2437598a5b41406046804d6926b45ece7d6825bb121ef2cbb956842ad" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.394089 4802 scope.go:117] "RemoveContainer" containerID="a429161b5ca02bde00e90f069400f765cf03f963c18b41c1371dc97cae4e0bff" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.426082 4802 scope.go:117] "RemoveContainer" containerID="0e84bd5e99bb2ccc6dd403d34add06248d81b7697d8ad0cd6e12e1990d656f78" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.453391 4802 scope.go:117] "RemoveContainer" containerID="af74e0abd8223e122a504fe7df7db398cf16798b588f1fa417c7b1a98becd12b" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.471405 4802 scope.go:117] "RemoveContainer" 
containerID="5bf7070824e23cb1a43c1ae64f9b21028f24d7f533382d902c642b1cf262dcbe" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.489785 4802 scope.go:117] "RemoveContainer" containerID="52197ce6395549a331e17575c31fd0ba092ebbbd71fd041c5da50c6199e82774" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.505410 4802 scope.go:117] "RemoveContainer" containerID="bda72201e4cc3ffe8b3afaa4391f980ff3c0592b7aa78ec3f940a3479b48af8b" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.517464 4802 scope.go:117] "RemoveContainer" containerID="65ee7610304688db8eccde9e2214f9f0d00ae3f057fca4a56e0967e45dd4db37" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.622949 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"603a15918457953011a996c08d6bf9ba99895cf41d00efda8a9d8c62e8656853"} Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.623000 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"be8fa04b9f9e7343ce6f60e30c3f929110bb3ecd9c35d8c20a9b7b70ffafbf3d"} Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.627123 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jpj5_c1c56664-b32b-475a-89eb-55910da58338/kube-multus/2.log" Oct 04 04:59:05 crc kubenswrapper[4802]: I1004 04:59:05.627208 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jpj5" event={"ID":"c1c56664-b32b-475a-89eb-55910da58338","Type":"ContainerStarted","Data":"4bdfd927e8548a0de1d17ae48edb5888e550d3be6347a3713b892343dbec030e"} Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.368390 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ac83cd-2981-4717-8cb4-2ca3e302461a" 
path="/var/lib/kubelet/pods/11ac83cd-2981-4717-8cb4-2ca3e302461a/volumes" Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.650106 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvsrv" event={"ID":"2cc33a76-77de-4149-a501-28de25d1b772","Type":"ContainerStarted","Data":"fc0e235455eb43fe884e2592f11b1d7fb2dea7459311b5c52719658bdbe48406"} Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.655973 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" event={"ID":"618bdd8f-d326-4da6-a8ee-b8aee0f1e09f","Type":"ContainerStarted","Data":"f2c51cfd5d67fe4bafc18796356d0abe03088cb2544690a6e1be1e852c3a2613"} Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.656770 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.658576 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-btfxr" event={"ID":"61f7ab8b-a895-4573-924e-2fbc1fd17e84","Type":"ContainerStarted","Data":"69642647b0d397e6277e034cdad1c521c808a633f987739edd5e711d18446cb9"} Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.660560 4802 generic.go:334] "Generic (PLEG): container finished" podID="b98f891b-b088-4d3b-81f6-1fd09fa22b23" containerID="603a15918457953011a996c08d6bf9ba99895cf41d00efda8a9d8c62e8656853" exitCode=0 Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.660602 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerDied","Data":"603a15918457953011a996c08d6bf9ba99895cf41d00efda8a9d8c62e8656853"} Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.668980 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-cainjector-7f985d654d-qvsrv" podStartSLOduration=2.526848693 podStartE2EDuration="13.668963779s" podCreationTimestamp="2025-10-04 04:58:53 +0000 UTC" firstStartedPulling="2025-10-04 04:58:54.15380835 +0000 UTC m=+776.561808975" lastFinishedPulling="2025-10-04 04:59:05.295923436 +0000 UTC m=+787.703924061" observedRunningTime="2025-10-04 04:59:06.666004785 +0000 UTC m=+789.074005410" watchObservedRunningTime="2025-10-04 04:59:06.668963779 +0000 UTC m=+789.076964404" Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.679276 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" podStartSLOduration=1.498516488 podStartE2EDuration="13.679262532s" podCreationTimestamp="2025-10-04 04:58:53 +0000 UTC" firstStartedPulling="2025-10-04 04:58:54.172058395 +0000 UTC m=+776.580059020" lastFinishedPulling="2025-10-04 04:59:06.352804439 +0000 UTC m=+788.760805064" observedRunningTime="2025-10-04 04:59:06.67812701 +0000 UTC m=+789.086127635" watchObservedRunningTime="2025-10-04 04:59:06.679262532 +0000 UTC m=+789.087263157" Oct 04 04:59:06 crc kubenswrapper[4802]: I1004 04:59:06.722023 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-btfxr" podStartSLOduration=2.454204415 podStartE2EDuration="13.722006878s" podCreationTimestamp="2025-10-04 04:58:53 +0000 UTC" firstStartedPulling="2025-10-04 04:58:54.032165618 +0000 UTC m=+776.440166243" lastFinishedPulling="2025-10-04 04:59:05.299968061 +0000 UTC m=+787.707968706" observedRunningTime="2025-10-04 04:59:06.72175132 +0000 UTC m=+789.129751975" watchObservedRunningTime="2025-10-04 04:59:06.722006878 +0000 UTC m=+789.130007493" Oct 04 04:59:07 crc kubenswrapper[4802]: I1004 04:59:07.672091 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" 
event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"8d029fe41089280141345c58491f1b2c86ce7fa344379ad23a4012cc91c2be98"} Oct 04 04:59:07 crc kubenswrapper[4802]: I1004 04:59:07.672523 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"c15a3314a7d5964cbe5a978508083ab74f05f3b1be2f33398910d70bc939d776"} Oct 04 04:59:07 crc kubenswrapper[4802]: I1004 04:59:07.672539 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"060eb1a81e7c0751081c9a04bc78f4c1373175fa161c24601a966be33bb622ca"} Oct 04 04:59:07 crc kubenswrapper[4802]: I1004 04:59:07.672550 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"edd3dd307f4aaf110be2771ef8b89a7ae07bca17d714772ba2fe37f2ebf10626"} Oct 04 04:59:07 crc kubenswrapper[4802]: I1004 04:59:07.672563 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"1e659f52a469df1fab1376c883e5898e7de2ecda29fa9dd424e91c6008d5cc81"} Oct 04 04:59:08 crc kubenswrapper[4802]: I1004 04:59:08.684869 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"b9ffcd35bcac513305b7c6621e5feccdd1ca2fdd3a69ce17cd85b54273c89250"} Oct 04 04:59:10 crc kubenswrapper[4802]: I1004 04:59:10.699326 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" 
event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"3748fa2795a8061a891ef1a9c10264bb873b591936971a94db875d283377a721"} Oct 04 04:59:13 crc kubenswrapper[4802]: I1004 04:59:13.598501 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-4zrhg" Oct 04 04:59:13 crc kubenswrapper[4802]: I1004 04:59:13.720217 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" event={"ID":"b98f891b-b088-4d3b-81f6-1fd09fa22b23","Type":"ContainerStarted","Data":"453f2e363c66e9abd3b2a3d1195ddf22eb3cfd7e624831d595bafbc98568493c"} Oct 04 04:59:13 crc kubenswrapper[4802]: I1004 04:59:13.720586 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:13 crc kubenswrapper[4802]: I1004 04:59:13.750873 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" podStartSLOduration=9.750853131 podStartE2EDuration="9.750853131s" podCreationTimestamp="2025-10-04 04:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 04:59:13.747232739 +0000 UTC m=+796.155233384" watchObservedRunningTime="2025-10-04 04:59:13.750853131 +0000 UTC m=+796.158853756" Oct 04 04:59:13 crc kubenswrapper[4802]: I1004 04:59:13.753971 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:14 crc kubenswrapper[4802]: I1004 04:59:14.726246 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:14 crc kubenswrapper[4802]: I1004 04:59:14.726298 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:14 crc 
kubenswrapper[4802]: I1004 04:59:14.760081 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:34 crc kubenswrapper[4802]: I1004 04:59:34.969656 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2rft8" Oct 04 04:59:52 crc kubenswrapper[4802]: I1004 04:59:52.662474 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 04:59:52 crc kubenswrapper[4802]: I1004 04:59:52.663410 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.388111 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr"] Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.390253 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.393155 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.397238 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr"] Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.464961 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22cfg\" (UniqueName: \"kubernetes.io/projected/a7e85ec1-39eb-4446-955f-b3714e5308af-kube-api-access-22cfg\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.465083 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.465147 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: 
I1004 04:59:55.566718 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.566809 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22cfg\" (UniqueName: \"kubernetes.io/projected/a7e85ec1-39eb-4446-955f-b3714e5308af-kube-api-access-22cfg\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.566846 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.571267 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.571438 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.593351 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22cfg\" (UniqueName: \"kubernetes.io/projected/a7e85ec1-39eb-4446-955f-b3714e5308af-kube-api-access-22cfg\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.713887 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.916235 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr"] Oct 04 04:59:55 crc kubenswrapper[4802]: W1004 04:59:55.928209 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7e85ec1_39eb_4446_955f_b3714e5308af.slice/crio-076233d339806a2025912e83a1560a55cb1a09423b42fc65a713e9a0ec716a00 WatchSource:0}: Error finding container 076233d339806a2025912e83a1560a55cb1a09423b42fc65a713e9a0ec716a00: Status 404 returned error can't find the container with id 076233d339806a2025912e83a1560a55cb1a09423b42fc65a713e9a0ec716a00 Oct 04 04:59:55 crc kubenswrapper[4802]: I1004 04:59:55.979391 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" 
event={"ID":"a7e85ec1-39eb-4446-955f-b3714e5308af","Type":"ContainerStarted","Data":"076233d339806a2025912e83a1560a55cb1a09423b42fc65a713e9a0ec716a00"} Oct 04 04:59:56 crc kubenswrapper[4802]: I1004 04:59:56.987452 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" event={"ID":"a7e85ec1-39eb-4446-955f-b3714e5308af","Type":"ContainerStarted","Data":"3d839480f242d270771dd1dcb79deca34087f470ae9c04850dd70babbf463e59"} Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.575192 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-95sfx"] Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.576420 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.585163 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-95sfx"] Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.697910 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-utilities\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.697978 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-catalog-content\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.698059 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-d2zjj\" (UniqueName: \"kubernetes.io/projected/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-kube-api-access-d2zjj\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.799778 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zjj\" (UniqueName: \"kubernetes.io/projected/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-kube-api-access-d2zjj\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.799950 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-utilities\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.799983 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-catalog-content\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.800682 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-catalog-content\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.800827 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-utilities\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.824381 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zjj\" (UniqueName: \"kubernetes.io/projected/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-kube-api-access-d2zjj\") pod \"redhat-operators-95sfx\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:57 crc kubenswrapper[4802]: I1004 04:59:57.897073 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 04:59:58 crc kubenswrapper[4802]: I1004 04:59:58.005282 4802 generic.go:334] "Generic (PLEG): container finished" podID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerID="3d839480f242d270771dd1dcb79deca34087f470ae9c04850dd70babbf463e59" exitCode=0 Oct 04 04:59:58 crc kubenswrapper[4802]: I1004 04:59:58.005494 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" event={"ID":"a7e85ec1-39eb-4446-955f-b3714e5308af","Type":"ContainerDied","Data":"3d839480f242d270771dd1dcb79deca34087f470ae9c04850dd70babbf463e59"} Oct 04 04:59:58 crc kubenswrapper[4802]: I1004 04:59:58.318600 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-95sfx"] Oct 04 04:59:58 crc kubenswrapper[4802]: W1004 04:59:58.325389 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa4d48d2_05fe_4cc7_b0a2_3b3f8a82b625.slice/crio-c90b726449440f2a37837f49d6ed5a708737287f039aff9e3300d65000cef117 WatchSource:0}: Error finding container c90b726449440f2a37837f49d6ed5a708737287f039aff9e3300d65000cef117: Status 
404 returned error can't find the container with id c90b726449440f2a37837f49d6ed5a708737287f039aff9e3300d65000cef117 Oct 04 04:59:59 crc kubenswrapper[4802]: I1004 04:59:59.012023 4802 generic.go:334] "Generic (PLEG): container finished" podID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerID="b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158" exitCode=0 Oct 04 04:59:59 crc kubenswrapper[4802]: I1004 04:59:59.012116 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95sfx" event={"ID":"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625","Type":"ContainerDied","Data":"b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158"} Oct 04 04:59:59 crc kubenswrapper[4802]: I1004 04:59:59.012398 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95sfx" event={"ID":"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625","Type":"ContainerStarted","Data":"c90b726449440f2a37837f49d6ed5a708737287f039aff9e3300d65000cef117"} Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.023228 4802 generic.go:334] "Generic (PLEG): container finished" podID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerID="d77bb6cab782071dd1bbae415bcf811052b87118371769ea04c89df1902f9b38" exitCode=0 Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.023321 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" event={"ID":"a7e85ec1-39eb-4446-955f-b3714e5308af","Type":"ContainerDied","Data":"d77bb6cab782071dd1bbae415bcf811052b87118371769ea04c89df1902f9b38"} Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.133267 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5"] Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.134050 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.136400 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.136444 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.144865 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5"] Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.237237 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-secret-volume\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.237316 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfv56\" (UniqueName: \"kubernetes.io/projected/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-kube-api-access-tfv56\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.237396 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-config-volume\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.338896 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-secret-volume\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.339352 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfv56\" (UniqueName: \"kubernetes.io/projected/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-kube-api-access-tfv56\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.339517 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-config-volume\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.340892 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-config-volume\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.345426 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-secret-volume\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.357795 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfv56\" (UniqueName: \"kubernetes.io/projected/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-kube-api-access-tfv56\") pod \"collect-profiles-29325900-bc4s5\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.454609 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:00 crc kubenswrapper[4802]: I1004 05:00:00.703597 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5"] Oct 04 05:00:00 crc kubenswrapper[4802]: W1004 05:00:00.710614 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda86efef7_cb21_48ee_aaaa_549dc94fbd1f.slice/crio-ffc45a7816a6ab72ad411512cae1cf543ef7608b2b33baa6a7e02c47a17de179 WatchSource:0}: Error finding container ffc45a7816a6ab72ad411512cae1cf543ef7608b2b33baa6a7e02c47a17de179: Status 404 returned error can't find the container with id ffc45a7816a6ab72ad411512cae1cf543ef7608b2b33baa6a7e02c47a17de179 Oct 04 05:00:01 crc kubenswrapper[4802]: I1004 05:00:01.031992 4802 generic.go:334] "Generic (PLEG): container finished" podID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerID="bc7e96a58834949b73df6d254b07420d259fa80c61b115e51d92a287417f637c" exitCode=0 Oct 04 05:00:01 crc kubenswrapper[4802]: I1004 05:00:01.032190 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" event={"ID":"a7e85ec1-39eb-4446-955f-b3714e5308af","Type":"ContainerDied","Data":"bc7e96a58834949b73df6d254b07420d259fa80c61b115e51d92a287417f637c"} Oct 04 05:00:01 crc kubenswrapper[4802]: I1004 05:00:01.034572 4802 generic.go:334] "Generic (PLEG): container finished" podID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerID="3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1" exitCode=0 Oct 04 05:00:01 crc kubenswrapper[4802]: I1004 05:00:01.034635 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95sfx" event={"ID":"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625","Type":"ContainerDied","Data":"3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1"} Oct 04 05:00:01 crc kubenswrapper[4802]: I1004 05:00:01.035984 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" event={"ID":"a86efef7-cb21-48ee-aaaa-549dc94fbd1f","Type":"ContainerStarted","Data":"b9f3aaad7d0dcefd40b9bf45027aa5cc5a09c50ccc6fbf0387beab907ec52f2e"} Oct 04 05:00:01 crc kubenswrapper[4802]: I1004 05:00:01.036024 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" event={"ID":"a86efef7-cb21-48ee-aaaa-549dc94fbd1f","Type":"ContainerStarted","Data":"ffc45a7816a6ab72ad411512cae1cf543ef7608b2b33baa6a7e02c47a17de179"} Oct 04 05:00:01 crc kubenswrapper[4802]: I1004 05:00:01.065905 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" podStartSLOduration=1.065879455 podStartE2EDuration="1.065879455s" podCreationTimestamp="2025-10-04 05:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:00:01.064027132 +0000 
UTC m=+843.472027757" watchObservedRunningTime="2025-10-04 05:00:01.065879455 +0000 UTC m=+843.473880080" Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.043492 4802 generic.go:334] "Generic (PLEG): container finished" podID="a86efef7-cb21-48ee-aaaa-549dc94fbd1f" containerID="b9f3aaad7d0dcefd40b9bf45027aa5cc5a09c50ccc6fbf0387beab907ec52f2e" exitCode=0 Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.043755 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" event={"ID":"a86efef7-cb21-48ee-aaaa-549dc94fbd1f","Type":"ContainerDied","Data":"b9f3aaad7d0dcefd40b9bf45027aa5cc5a09c50ccc6fbf0387beab907ec52f2e"} Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.338113 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.468230 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-bundle\") pod \"a7e85ec1-39eb-4446-955f-b3714e5308af\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.468306 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-util\") pod \"a7e85ec1-39eb-4446-955f-b3714e5308af\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.468379 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22cfg\" (UniqueName: \"kubernetes.io/projected/a7e85ec1-39eb-4446-955f-b3714e5308af-kube-api-access-22cfg\") pod \"a7e85ec1-39eb-4446-955f-b3714e5308af\" (UID: \"a7e85ec1-39eb-4446-955f-b3714e5308af\") " Oct 04 05:00:02 
crc kubenswrapper[4802]: I1004 05:00:02.469679 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-bundle" (OuterVolumeSpecName: "bundle") pod "a7e85ec1-39eb-4446-955f-b3714e5308af" (UID: "a7e85ec1-39eb-4446-955f-b3714e5308af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.475394 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e85ec1-39eb-4446-955f-b3714e5308af-kube-api-access-22cfg" (OuterVolumeSpecName: "kube-api-access-22cfg") pod "a7e85ec1-39eb-4446-955f-b3714e5308af" (UID: "a7e85ec1-39eb-4446-955f-b3714e5308af"). InnerVolumeSpecName "kube-api-access-22cfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.481079 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-util" (OuterVolumeSpecName: "util") pod "a7e85ec1-39eb-4446-955f-b3714e5308af" (UID: "a7e85ec1-39eb-4446-955f-b3714e5308af"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.570386 4802 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.570445 4802 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7e85ec1-39eb-4446-955f-b3714e5308af-util\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:02 crc kubenswrapper[4802]: I1004 05:00:02.570462 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22cfg\" (UniqueName: \"kubernetes.io/projected/a7e85ec1-39eb-4446-955f-b3714e5308af-kube-api-access-22cfg\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.053246 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" event={"ID":"a7e85ec1-39eb-4446-955f-b3714e5308af","Type":"ContainerDied","Data":"076233d339806a2025912e83a1560a55cb1a09423b42fc65a713e9a0ec716a00"} Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.053281 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.053297 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="076233d339806a2025912e83a1560a55cb1a09423b42fc65a713e9a0ec716a00" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.055677 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95sfx" event={"ID":"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625","Type":"ContainerStarted","Data":"12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d"} Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.083907 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-95sfx" podStartSLOduration=2.720684223 podStartE2EDuration="6.083883189s" podCreationTimestamp="2025-10-04 04:59:57 +0000 UTC" firstStartedPulling="2025-10-04 04:59:59.014353318 +0000 UTC m=+841.422353933" lastFinishedPulling="2025-10-04 05:00:02.377552284 +0000 UTC m=+844.785552899" observedRunningTime="2025-10-04 05:00:03.077038915 +0000 UTC m=+845.485039550" watchObservedRunningTime="2025-10-04 05:00:03.083883189 +0000 UTC m=+845.491883814" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.324055 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.381604 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-config-volume\") pod \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.381721 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-secret-volume\") pod \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.381863 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfv56\" (UniqueName: \"kubernetes.io/projected/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-kube-api-access-tfv56\") pod \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\" (UID: \"a86efef7-cb21-48ee-aaaa-549dc94fbd1f\") " Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.383211 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a86efef7-cb21-48ee-aaaa-549dc94fbd1f" (UID: "a86efef7-cb21-48ee-aaaa-549dc94fbd1f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.388664 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-kube-api-access-tfv56" (OuterVolumeSpecName: "kube-api-access-tfv56") pod "a86efef7-cb21-48ee-aaaa-549dc94fbd1f" (UID: "a86efef7-cb21-48ee-aaaa-549dc94fbd1f"). 
InnerVolumeSpecName "kube-api-access-tfv56". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.390412 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a86efef7-cb21-48ee-aaaa-549dc94fbd1f" (UID: "a86efef7-cb21-48ee-aaaa-549dc94fbd1f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.483592 4802 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.483634 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfv56\" (UniqueName: \"kubernetes.io/projected/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-kube-api-access-tfv56\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:03 crc kubenswrapper[4802]: I1004 05:00:03.483666 4802 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a86efef7-cb21-48ee-aaaa-549dc94fbd1f-config-volume\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:04 crc kubenswrapper[4802]: I1004 05:00:04.063166 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" Oct 04 05:00:04 crc kubenswrapper[4802]: I1004 05:00:04.063823 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5" event={"ID":"a86efef7-cb21-48ee-aaaa-549dc94fbd1f","Type":"ContainerDied","Data":"ffc45a7816a6ab72ad411512cae1cf543ef7608b2b33baa6a7e02c47a17de179"} Oct 04 05:00:04 crc kubenswrapper[4802]: I1004 05:00:04.063866 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffc45a7816a6ab72ad411512cae1cf543ef7608b2b33baa6a7e02c47a17de179" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.144146 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8"] Oct 04 05:00:06 crc kubenswrapper[4802]: E1004 05:00:06.144812 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerName="pull" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.144830 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerName="pull" Oct 04 05:00:06 crc kubenswrapper[4802]: E1004 05:00:06.144861 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerName="util" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.144870 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerName="util" Oct 04 05:00:06 crc kubenswrapper[4802]: E1004 05:00:06.144884 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86efef7-cb21-48ee-aaaa-549dc94fbd1f" containerName="collect-profiles" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.144893 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86efef7-cb21-48ee-aaaa-549dc94fbd1f" containerName="collect-profiles" Oct 04 05:00:06 
crc kubenswrapper[4802]: E1004 05:00:06.144905 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerName="extract" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.144912 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerName="extract" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.145059 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e85ec1-39eb-4446-955f-b3714e5308af" containerName="extract" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.145075 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a86efef7-cb21-48ee-aaaa-549dc94fbd1f" containerName="collect-profiles" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.145632 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.147739 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.148031 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.148116 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9l7bb" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.158218 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8"] Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.217793 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv4kf\" (UniqueName: \"kubernetes.io/projected/bf01160a-8834-4760-b5c6-c6870ac75db3-kube-api-access-kv4kf\") pod 
\"nmstate-operator-858ddd8f98-ck9p8\" (UID: \"bf01160a-8834-4760-b5c6-c6870ac75db3\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.319603 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv4kf\" (UniqueName: \"kubernetes.io/projected/bf01160a-8834-4760-b5c6-c6870ac75db3-kube-api-access-kv4kf\") pod \"nmstate-operator-858ddd8f98-ck9p8\" (UID: \"bf01160a-8834-4760-b5c6-c6870ac75db3\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.338135 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv4kf\" (UniqueName: \"kubernetes.io/projected/bf01160a-8834-4760-b5c6-c6870ac75db3-kube-api-access-kv4kf\") pod \"nmstate-operator-858ddd8f98-ck9p8\" (UID: \"bf01160a-8834-4760-b5c6-c6870ac75db3\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.463440 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8" Oct 04 05:00:06 crc kubenswrapper[4802]: I1004 05:00:06.872694 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8"] Oct 04 05:00:07 crc kubenswrapper[4802]: I1004 05:00:07.079226 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8" event={"ID":"bf01160a-8834-4760-b5c6-c6870ac75db3","Type":"ContainerStarted","Data":"239ba2195dfa9729b249dee8d24a43c1e3dadf543624074efaf4e6b2688c9681"} Oct 04 05:00:07 crc kubenswrapper[4802]: I1004 05:00:07.897338 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 05:00:07 crc kubenswrapper[4802]: I1004 05:00:07.897390 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 05:00:07 crc kubenswrapper[4802]: I1004 05:00:07.946517 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 05:00:08 crc kubenswrapper[4802]: I1004 05:00:08.130862 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.357319 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-95sfx"] Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.357585 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-95sfx" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerName="registry-server" containerID="cri-o://12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d" gracePeriod=2 Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.859388 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.885132 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-utilities\") pod \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.885390 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-utilities" (OuterVolumeSpecName: "utilities") pod "aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" (UID: "aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.885469 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2zjj\" (UniqueName: \"kubernetes.io/projected/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-kube-api-access-d2zjj\") pod \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.885845 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-catalog-content\") pod \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\" (UID: \"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625\") " Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.887015 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.897491 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-kube-api-access-d2zjj" (OuterVolumeSpecName: "kube-api-access-d2zjj") pod "aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" (UID: "aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625"). InnerVolumeSpecName "kube-api-access-d2zjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.970687 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" (UID: "aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.988977 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2zjj\" (UniqueName: \"kubernetes.io/projected/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-kube-api-access-d2zjj\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:10 crc kubenswrapper[4802]: I1004 05:00:10.989026 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.104903 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8" event={"ID":"bf01160a-8834-4760-b5c6-c6870ac75db3","Type":"ContainerStarted","Data":"dcadfb29c577b29bfc5a8a95dee15d2efd2b60a114896bde5e2e684763eee627"} Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.107372 4802 generic.go:334] "Generic (PLEG): container finished" podID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerID="12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d" exitCode=0 Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.107405 4802 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95sfx" event={"ID":"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625","Type":"ContainerDied","Data":"12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d"} Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.107437 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95sfx" event={"ID":"aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625","Type":"ContainerDied","Data":"c90b726449440f2a37837f49d6ed5a708737287f039aff9e3300d65000cef117"} Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.107468 4802 scope.go:117] "RemoveContainer" containerID="12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.107459 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95sfx" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.122362 4802 scope.go:117] "RemoveContainer" containerID="3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.128159 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-ck9p8" podStartSLOduration=1.168570091 podStartE2EDuration="5.128142366s" podCreationTimestamp="2025-10-04 05:00:06 +0000 UTC" firstStartedPulling="2025-10-04 05:00:06.880714196 +0000 UTC m=+849.288714821" lastFinishedPulling="2025-10-04 05:00:10.840286471 +0000 UTC m=+853.248287096" observedRunningTime="2025-10-04 05:00:11.123780422 +0000 UTC m=+853.531781057" watchObservedRunningTime="2025-10-04 05:00:11.128142366 +0000 UTC m=+853.536142991" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.145613 4802 scope.go:117] "RemoveContainer" containerID="b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.146080 4802 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-95sfx"] Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.148693 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-95sfx"] Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.163711 4802 scope.go:117] "RemoveContainer" containerID="12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d" Oct 04 05:00:11 crc kubenswrapper[4802]: E1004 05:00:11.164260 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d\": container with ID starting with 12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d not found: ID does not exist" containerID="12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.164367 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d"} err="failed to get container status \"12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d\": rpc error: code = NotFound desc = could not find container \"12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d\": container with ID starting with 12eeabd2040d4ce9f523054b8083d7e3bcd3e7b3cd2e099d7472f71d6463158d not found: ID does not exist" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.164482 4802 scope.go:117] "RemoveContainer" containerID="3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1" Oct 04 05:00:11 crc kubenswrapper[4802]: E1004 05:00:11.164903 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1\": container with ID starting with 
3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1 not found: ID does not exist" containerID="3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.164930 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1"} err="failed to get container status \"3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1\": rpc error: code = NotFound desc = could not find container \"3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1\": container with ID starting with 3e43c2a7f1a879f9f5666a727655173bb79cb23235451e1a138b48b6d7cf05d1 not found: ID does not exist" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.164947 4802 scope.go:117] "RemoveContainer" containerID="b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158" Oct 04 05:00:11 crc kubenswrapper[4802]: E1004 05:00:11.165199 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158\": container with ID starting with b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158 not found: ID does not exist" containerID="b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158" Oct 04 05:00:11 crc kubenswrapper[4802]: I1004 05:00:11.165286 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158"} err="failed to get container status \"b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158\": rpc error: code = NotFound desc = could not find container \"b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158\": container with ID starting with b2d3d36fd333a39841da999bcd74a4da52880fe4ca09ebfec2b1a3cebdd89158 not found: ID does not 
exist" Oct 04 05:00:12 crc kubenswrapper[4802]: I1004 05:00:12.366868 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" path="/var/lib/kubelet/pods/aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625/volumes" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.024471 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9"] Oct 04 05:00:16 crc kubenswrapper[4802]: E1004 05:00:16.025119 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerName="registry-server" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.025137 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerName="registry-server" Oct 04 05:00:16 crc kubenswrapper[4802]: E1004 05:00:16.025156 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerName="extract-utilities" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.025164 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerName="extract-utilities" Oct 04 05:00:16 crc kubenswrapper[4802]: E1004 05:00:16.025186 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerName="extract-content" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.025194 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerName="extract-content" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.025341 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4d48d2-05fe-4cc7-b0a2-3b3f8a82b625" containerName="registry-server" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.026150 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.028167 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-zl8t9" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.043475 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.043524 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.044173 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.047988 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.061723 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pj4r\" (UniqueName: \"kubernetes.io/projected/4123c7f6-6452-4dc2-a07d-d0603691c48e-kube-api-access-8pj4r\") pod \"nmstate-metrics-fdff9cb8d-6sht9\" (UID: \"4123c7f6-6452-4dc2-a07d-d0603691c48e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.067218 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fhr6c"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.068035 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.080057 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.163104 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a5c2438-a3fd-493a-bc06-dee5dfe74fac-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-fcwvr\" (UID: \"6a5c2438-a3fd-493a-bc06-dee5dfe74fac\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.163500 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pj4r\" (UniqueName: \"kubernetes.io/projected/4123c7f6-6452-4dc2-a07d-d0603691c48e-kube-api-access-8pj4r\") pod \"nmstate-metrics-fdff9cb8d-6sht9\" (UID: \"4123c7f6-6452-4dc2-a07d-d0603691c48e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.163551 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-ovs-socket\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.163584 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frjqw\" (UniqueName: \"kubernetes.io/projected/158e0059-2885-435b-bd19-1f6208a33f36-kube-api-access-frjqw\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.163609 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9867\" (UniqueName: \"kubernetes.io/projected/6a5c2438-a3fd-493a-bc06-dee5dfe74fac-kube-api-access-b9867\") pod \"nmstate-webhook-6cdbc54649-fcwvr\" (UID: \"6a5c2438-a3fd-493a-bc06-dee5dfe74fac\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.163654 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-nmstate-lock\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.163687 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-dbus-socket\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.191094 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.192294 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.195382 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pj4r\" (UniqueName: \"kubernetes.io/projected/4123c7f6-6452-4dc2-a07d-d0603691c48e-kube-api-access-8pj4r\") pod \"nmstate-metrics-fdff9cb8d-6sht9\" (UID: \"4123c7f6-6452-4dc2-a07d-d0603691c48e\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.197796 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.197796 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.197800 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bgghj" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.215077 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.264768 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d4h2\" (UniqueName: \"kubernetes.io/projected/5a599523-a3a9-4820-9370-59a99fa3e327-kube-api-access-7d4h2\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.264853 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-ovs-socket\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " 
pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.264885 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frjqw\" (UniqueName: \"kubernetes.io/projected/158e0059-2885-435b-bd19-1f6208a33f36-kube-api-access-frjqw\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.264911 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9867\" (UniqueName: \"kubernetes.io/projected/6a5c2438-a3fd-493a-bc06-dee5dfe74fac-kube-api-access-b9867\") pod \"nmstate-webhook-6cdbc54649-fcwvr\" (UID: \"6a5c2438-a3fd-493a-bc06-dee5dfe74fac\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.264936 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-nmstate-lock\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.264959 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-dbus-socket\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.264994 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a599523-a3a9-4820-9370-59a99fa3e327-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " 
pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.264991 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-ovs-socket\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.265023 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a599523-a3a9-4820-9370-59a99fa3e327-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.265075 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-nmstate-lock\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.265160 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a5c2438-a3fd-493a-bc06-dee5dfe74fac-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-fcwvr\" (UID: \"6a5c2438-a3fd-493a-bc06-dee5dfe74fac\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.265565 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/158e0059-2885-435b-bd19-1f6208a33f36-dbus-socket\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" 
Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.270495 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6a5c2438-a3fd-493a-bc06-dee5dfe74fac-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-fcwvr\" (UID: \"6a5c2438-a3fd-493a-bc06-dee5dfe74fac\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.283373 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frjqw\" (UniqueName: \"kubernetes.io/projected/158e0059-2885-435b-bd19-1f6208a33f36-kube-api-access-frjqw\") pod \"nmstate-handler-fhr6c\" (UID: \"158e0059-2885-435b-bd19-1f6208a33f36\") " pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.283614 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9867\" (UniqueName: \"kubernetes.io/projected/6a5c2438-a3fd-493a-bc06-dee5dfe74fac-kube-api-access-b9867\") pod \"nmstate-webhook-6cdbc54649-fcwvr\" (UID: \"6a5c2438-a3fd-493a-bc06-dee5dfe74fac\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.366051 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a599523-a3a9-4820-9370-59a99fa3e327-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.366102 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a599523-a3a9-4820-9370-59a99fa3e327-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " 
pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.366157 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d4h2\" (UniqueName: \"kubernetes.io/projected/5a599523-a3a9-4820-9370-59a99fa3e327-kube-api-access-7d4h2\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: E1004 05:00:16.366251 4802 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Oct 04 05:00:16 crc kubenswrapper[4802]: E1004 05:00:16.366351 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a599523-a3a9-4820-9370-59a99fa3e327-plugin-serving-cert podName:5a599523-a3a9-4820-9370-59a99fa3e327 nodeName:}" failed. No retries permitted until 2025-10-04 05:00:16.866329371 +0000 UTC m=+859.274329996 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5a599523-a3a9-4820-9370-59a99fa3e327-plugin-serving-cert") pod "nmstate-console-plugin-6b874cbd85-bcj7b" (UID: "5a599523-a3a9-4820-9370-59a99fa3e327") : secret "plugin-serving-cert" not found Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.368118 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a599523-a3a9-4820-9370-59a99fa3e327-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.381704 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fd886b559-jn29s"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.382917 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.391257 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d4h2\" (UniqueName: \"kubernetes.io/projected/5a599523-a3a9-4820-9370-59a99fa3e327-kube-api-access-7d4h2\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.401023 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.402092 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fd886b559-jn29s"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.416741 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.425686 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.467136 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-config\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.467261 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-serving-cert\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.467311 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-oauth-config\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.467338 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkknb\" (UniqueName: \"kubernetes.io/projected/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-kube-api-access-rkknb\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.467440 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-service-ca\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.467474 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-oauth-serving-cert\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.467504 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-trusted-ca-bundle\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: W1004 05:00:16.478715 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod158e0059_2885_435b_bd19_1f6208a33f36.slice/crio-868711ed8a0d79b585737b576de753172df3ea146e68be11eae6fa9ff6bbadf4 WatchSource:0}: Error finding container 868711ed8a0d79b585737b576de753172df3ea146e68be11eae6fa9ff6bbadf4: Status 404 returned error can't find the container with id 868711ed8a0d79b585737b576de753172df3ea146e68be11eae6fa9ff6bbadf4 Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.569201 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-trusted-ca-bundle\") pod \"console-6fd886b559-jn29s\" 
(UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.569285 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-config\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.569343 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-serving-cert\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.569386 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-oauth-config\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.569410 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkknb\" (UniqueName: \"kubernetes.io/projected/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-kube-api-access-rkknb\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.569462 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-service-ca\") pod \"console-6fd886b559-jn29s\" (UID: 
\"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.569484 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-oauth-serving-cert\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.570670 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-oauth-serving-cert\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.570683 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-config\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.570670 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-trusted-ca-bundle\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.571301 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-service-ca\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " 
pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.574593 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-oauth-config\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.574976 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-console-serving-cert\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.591232 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkknb\" (UniqueName: \"kubernetes.io/projected/a85a00c1-4c10-4ef2-9bb0-4cfdccff2744-kube-api-access-rkknb\") pod \"console-6fd886b559-jn29s\" (UID: \"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744\") " pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.707924 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.721767 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr"] Oct 04 05:00:16 crc kubenswrapper[4802]: W1004 05:00:16.731141 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a5c2438_a3fd_493a_bc06_dee5dfe74fac.slice/crio-5ab692149f255426b0437976ea4cc9eca91dc09153380f7c461a97ef84ead149 WatchSource:0}: Error finding container 5ab692149f255426b0437976ea4cc9eca91dc09153380f7c461a97ef84ead149: Status 404 returned error can't find the container with id 5ab692149f255426b0437976ea4cc9eca91dc09153380f7c461a97ef84ead149 Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.873502 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a599523-a3a9-4820-9370-59a99fa3e327-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.877442 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9"] Oct 04 05:00:16 crc kubenswrapper[4802]: I1004 05:00:16.878845 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a599523-a3a9-4820-9370-59a99fa3e327-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-bcj7b\" (UID: \"5a599523-a3a9-4820-9370-59a99fa3e327\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:16 crc kubenswrapper[4802]: W1004 05:00:16.880942 4802 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4123c7f6_6452_4dc2_a07d_d0603691c48e.slice/crio-c957c67a3e5b6b5b0657299c074e2c7f7ccff847dabe0e0f55a2682c7cacee06 WatchSource:0}: Error finding container c957c67a3e5b6b5b0657299c074e2c7f7ccff847dabe0e0f55a2682c7cacee06: Status 404 returned error can't find the container with id c957c67a3e5b6b5b0657299c074e2c7f7ccff847dabe0e0f55a2682c7cacee06 Oct 04 05:00:17 crc kubenswrapper[4802]: I1004 05:00:17.102629 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fd886b559-jn29s"] Oct 04 05:00:17 crc kubenswrapper[4802]: W1004 05:00:17.110558 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda85a00c1_4c10_4ef2_9bb0_4cfdccff2744.slice/crio-b5d517f195ae0e1474e21b5abc1b4f786e1870df331a617b4eb3464b9ae0d0cf WatchSource:0}: Error finding container b5d517f195ae0e1474e21b5abc1b4f786e1870df331a617b4eb3464b9ae0d0cf: Status 404 returned error can't find the container with id b5d517f195ae0e1474e21b5abc1b4f786e1870df331a617b4eb3464b9ae0d0cf Oct 04 05:00:17 crc kubenswrapper[4802]: I1004 05:00:17.142349 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" Oct 04 05:00:17 crc kubenswrapper[4802]: I1004 05:00:17.148606 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fhr6c" event={"ID":"158e0059-2885-435b-bd19-1f6208a33f36","Type":"ContainerStarted","Data":"868711ed8a0d79b585737b576de753172df3ea146e68be11eae6fa9ff6bbadf4"} Oct 04 05:00:17 crc kubenswrapper[4802]: I1004 05:00:17.150093 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" event={"ID":"6a5c2438-a3fd-493a-bc06-dee5dfe74fac","Type":"ContainerStarted","Data":"5ab692149f255426b0437976ea4cc9eca91dc09153380f7c461a97ef84ead149"} Oct 04 05:00:17 crc kubenswrapper[4802]: I1004 05:00:17.152414 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" event={"ID":"4123c7f6-6452-4dc2-a07d-d0603691c48e","Type":"ContainerStarted","Data":"c957c67a3e5b6b5b0657299c074e2c7f7ccff847dabe0e0f55a2682c7cacee06"} Oct 04 05:00:17 crc kubenswrapper[4802]: I1004 05:00:17.153358 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fd886b559-jn29s" event={"ID":"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744","Type":"ContainerStarted","Data":"b5d517f195ae0e1474e21b5abc1b4f786e1870df331a617b4eb3464b9ae0d0cf"} Oct 04 05:00:17 crc kubenswrapper[4802]: I1004 05:00:17.586277 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b"] Oct 04 05:00:18 crc kubenswrapper[4802]: I1004 05:00:18.162405 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" event={"ID":"5a599523-a3a9-4820-9370-59a99fa3e327","Type":"ContainerStarted","Data":"ba9ab1b6eea554e6c19509759d4896db771cc0b381dd95525189f2bd0a40d031"} Oct 04 05:00:18 crc kubenswrapper[4802]: I1004 05:00:18.164788 4802 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-console/console-6fd886b559-jn29s" event={"ID":"a85a00c1-4c10-4ef2-9bb0-4cfdccff2744","Type":"ContainerStarted","Data":"ae5b173551345cccd69d9c9bd7409cfcbfc3cab1a1e06e52f69a4f6dc3570430"} Oct 04 05:00:18 crc kubenswrapper[4802]: I1004 05:00:18.386972 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fd886b559-jn29s" podStartSLOduration=2.386949599 podStartE2EDuration="2.386949599s" podCreationTimestamp="2025-10-04 05:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:00:18.188416573 +0000 UTC m=+860.596417208" watchObservedRunningTime="2025-10-04 05:00:18.386949599 +0000 UTC m=+860.794950224" Oct 04 05:00:22 crc kubenswrapper[4802]: I1004 05:00:22.662442 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:00:22 crc kubenswrapper[4802]: I1004 05:00:22.662818 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:00:24 crc kubenswrapper[4802]: I1004 05:00:24.204592 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" event={"ID":"4123c7f6-6452-4dc2-a07d-d0603691c48e","Type":"ContainerStarted","Data":"2a5a13855573961f50e40ff401d98dcc64126a98f3d2f11d8da2344946c7863f"} Oct 04 05:00:24 crc kubenswrapper[4802]: I1004 05:00:24.206483 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-handler-fhr6c" event={"ID":"158e0059-2885-435b-bd19-1f6208a33f36","Type":"ContainerStarted","Data":"bf1956301b9add14c66ae57b2e25173c349b3a12ea9f68aac30b8bd1a584b3b9"} Oct 04 05:00:24 crc kubenswrapper[4802]: I1004 05:00:24.206851 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:24 crc kubenswrapper[4802]: I1004 05:00:24.210902 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" event={"ID":"6a5c2438-a3fd-493a-bc06-dee5dfe74fac","Type":"ContainerStarted","Data":"917a00d31e0a58d32ff0d09b2114f691cd7c9bb4174b73300996f929b75af6fa"} Oct 04 05:00:24 crc kubenswrapper[4802]: I1004 05:00:24.211134 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:24 crc kubenswrapper[4802]: I1004 05:00:24.221570 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fhr6c" podStartSLOduration=1.427826856 podStartE2EDuration="8.221553773s" podCreationTimestamp="2025-10-04 05:00:16 +0000 UTC" firstStartedPulling="2025-10-04 05:00:16.48375889 +0000 UTC m=+858.891759515" lastFinishedPulling="2025-10-04 05:00:23.277485797 +0000 UTC m=+865.685486432" observedRunningTime="2025-10-04 05:00:24.2204096 +0000 UTC m=+866.628410245" watchObservedRunningTime="2025-10-04 05:00:24.221553773 +0000 UTC m=+866.629554398" Oct 04 05:00:24 crc kubenswrapper[4802]: I1004 05:00:24.247530 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" podStartSLOduration=1.682007274 podStartE2EDuration="8.247502931s" podCreationTimestamp="2025-10-04 05:00:16 +0000 UTC" firstStartedPulling="2025-10-04 05:00:16.737447754 +0000 UTC m=+859.145448389" lastFinishedPulling="2025-10-04 05:00:23.302943381 +0000 UTC m=+865.710944046" 
observedRunningTime="2025-10-04 05:00:24.237690002 +0000 UTC m=+866.645690657" watchObservedRunningTime="2025-10-04 05:00:24.247502931 +0000 UTC m=+866.655503556" Oct 04 05:00:25 crc kubenswrapper[4802]: I1004 05:00:25.221727 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" event={"ID":"5a599523-a3a9-4820-9370-59a99fa3e327","Type":"ContainerStarted","Data":"4587083218a97f070e92fcc99933227a84854c6c3ea5db90b34ba0ea2420a39f"} Oct 04 05:00:26 crc kubenswrapper[4802]: I1004 05:00:26.709883 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:26 crc kubenswrapper[4802]: I1004 05:00:26.710277 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:26 crc kubenswrapper[4802]: I1004 05:00:26.714464 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:26 crc kubenswrapper[4802]: I1004 05:00:26.735407 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-bcj7b" podStartSLOduration=4.306151055 podStartE2EDuration="10.735377946s" podCreationTimestamp="2025-10-04 05:00:16 +0000 UTC" firstStartedPulling="2025-10-04 05:00:17.594745643 +0000 UTC m=+860.002746268" lastFinishedPulling="2025-10-04 05:00:24.023972534 +0000 UTC m=+866.431973159" observedRunningTime="2025-10-04 05:00:25.243336838 +0000 UTC m=+867.651337463" watchObservedRunningTime="2025-10-04 05:00:26.735377946 +0000 UTC m=+869.143378591" Oct 04 05:00:27 crc kubenswrapper[4802]: I1004 05:00:27.249234 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fd886b559-jn29s" Oct 04 05:00:27 crc kubenswrapper[4802]: I1004 05:00:27.309547 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-6fwp2"] Oct 04 05:00:28 crc kubenswrapper[4802]: I1004 05:00:28.255237 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" event={"ID":"4123c7f6-6452-4dc2-a07d-d0603691c48e","Type":"ContainerStarted","Data":"01620ce17e31932bf24b9c4cdd3063b598ce245d72660d42e7a5375bfea75c12"} Oct 04 05:00:31 crc kubenswrapper[4802]: I1004 05:00:31.449901 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fhr6c" Oct 04 05:00:31 crc kubenswrapper[4802]: I1004 05:00:31.470816 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-6sht9" podStartSLOduration=4.652890335 podStartE2EDuration="15.470790064s" podCreationTimestamp="2025-10-04 05:00:16 +0000 UTC" firstStartedPulling="2025-10-04 05:00:16.883417145 +0000 UTC m=+859.291417770" lastFinishedPulling="2025-10-04 05:00:27.701316874 +0000 UTC m=+870.109317499" observedRunningTime="2025-10-04 05:00:29.286537881 +0000 UTC m=+871.694538526" watchObservedRunningTime="2025-10-04 05:00:31.470790064 +0000 UTC m=+873.878790689" Oct 04 05:00:36 crc kubenswrapper[4802]: I1004 05:00:36.424949 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-fcwvr" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.014596 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m"] Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.017865 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.024361 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.024962 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m"] Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.147854 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvj6z\" (UniqueName: \"kubernetes.io/projected/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-kube-api-access-bvj6z\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.147934 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.147973 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: 
I1004 05:00:50.249940 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvj6z\" (UniqueName: \"kubernetes.io/projected/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-kube-api-access-bvj6z\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.249996 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.250016 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.250693 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.251127 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.279120 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvj6z\" (UniqueName: \"kubernetes.io/projected/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-kube-api-access-bvj6z\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.339550 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:50 crc kubenswrapper[4802]: I1004 05:00:50.656105 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m"] Oct 04 05:00:51 crc kubenswrapper[4802]: I1004 05:00:51.403571 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" event={"ID":"ad46a69d-5845-4fc3-861b-3d8ebd4106c6","Type":"ContainerStarted","Data":"85a808a207f1d03f315d037a6536c00be5421ddb7d91733c907cfaf6544b03b6"} Oct 04 05:00:51 crc kubenswrapper[4802]: I1004 05:00:51.403636 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" event={"ID":"ad46a69d-5845-4fc3-861b-3d8ebd4106c6","Type":"ContainerStarted","Data":"7c3aa9c088deec9c588149e2805896179db81d3e277b277b8364a4b5ca67fe2e"} Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.351508 4802 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6fwp2" podUID="64860eca-743c-423a-8ee4-a1e5fd4f667d" containerName="console" containerID="cri-o://3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69" gracePeriod=15 Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.412562 4802 generic.go:334] "Generic (PLEG): container finished" podID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerID="85a808a207f1d03f315d037a6536c00be5421ddb7d91733c907cfaf6544b03b6" exitCode=0 Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.412634 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" event={"ID":"ad46a69d-5845-4fc3-861b-3d8ebd4106c6","Type":"ContainerDied","Data":"85a808a207f1d03f315d037a6536c00be5421ddb7d91733c907cfaf6544b03b6"} Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.668615 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.668752 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.668832 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.669920 4802 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bf3a67a3aced7a776f95ac83df345cb7b69786ce2cff835d8681589db22e4b4"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.670029 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://4bf3a67a3aced7a776f95ac83df345cb7b69786ce2cff835d8681589db22e4b4" gracePeriod=600 Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.868693 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6fwp2_64860eca-743c-423a-8ee4-a1e5fd4f667d/console/0.log" Oct 04 05:00:52 crc kubenswrapper[4802]: I1004 05:00:52.868889 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.002349 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-trusted-ca-bundle\") pod \"64860eca-743c-423a-8ee4-a1e5fd4f667d\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.002454 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-serving-cert\") pod \"64860eca-743c-423a-8ee4-a1e5fd4f667d\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.002487 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-oauth-config\") pod \"64860eca-743c-423a-8ee4-a1e5fd4f667d\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.002513 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-config\") pod \"64860eca-743c-423a-8ee4-a1e5fd4f667d\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.002570 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-oauth-serving-cert\") pod \"64860eca-743c-423a-8ee4-a1e5fd4f667d\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.002590 4802 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-service-ca\") pod \"64860eca-743c-423a-8ee4-a1e5fd4f667d\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.002621 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrvzl\" (UniqueName: \"kubernetes.io/projected/64860eca-743c-423a-8ee4-a1e5fd4f667d-kube-api-access-rrvzl\") pod \"64860eca-743c-423a-8ee4-a1e5fd4f667d\" (UID: \"64860eca-743c-423a-8ee4-a1e5fd4f667d\") " Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.003601 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-config" (OuterVolumeSpecName: "console-config") pod "64860eca-743c-423a-8ee4-a1e5fd4f667d" (UID: "64860eca-743c-423a-8ee4-a1e5fd4f667d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.003753 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-service-ca" (OuterVolumeSpecName: "service-ca") pod "64860eca-743c-423a-8ee4-a1e5fd4f667d" (UID: "64860eca-743c-423a-8ee4-a1e5fd4f667d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.003861 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "64860eca-743c-423a-8ee4-a1e5fd4f667d" (UID: "64860eca-743c-423a-8ee4-a1e5fd4f667d"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.004205 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "64860eca-743c-423a-8ee4-a1e5fd4f667d" (UID: "64860eca-743c-423a-8ee4-a1e5fd4f667d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.009319 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "64860eca-743c-423a-8ee4-a1e5fd4f667d" (UID: "64860eca-743c-423a-8ee4-a1e5fd4f667d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.009626 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64860eca-743c-423a-8ee4-a1e5fd4f667d-kube-api-access-rrvzl" (OuterVolumeSpecName: "kube-api-access-rrvzl") pod "64860eca-743c-423a-8ee4-a1e5fd4f667d" (UID: "64860eca-743c-423a-8ee4-a1e5fd4f667d"). InnerVolumeSpecName "kube-api-access-rrvzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.009777 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "64860eca-743c-423a-8ee4-a1e5fd4f667d" (UID: "64860eca-743c-423a-8ee4-a1e5fd4f667d"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.104578 4802 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.104696 4802 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.104711 4802 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.104721 4802 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-console-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.104736 4802 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.104745 4802 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/64860eca-743c-423a-8ee4-a1e5fd4f667d-service-ca\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.104753 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrvzl\" (UniqueName: \"kubernetes.io/projected/64860eca-743c-423a-8ee4-a1e5fd4f667d-kube-api-access-rrvzl\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:53 crc 
kubenswrapper[4802]: I1004 05:00:53.425885 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="4bf3a67a3aced7a776f95ac83df345cb7b69786ce2cff835d8681589db22e4b4" exitCode=0 Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.425979 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"4bf3a67a3aced7a776f95ac83df345cb7b69786ce2cff835d8681589db22e4b4"} Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.426044 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"be240c6f7c9da0768b330ef7604de12df37604afd0ee9a212f9d7f4a15105260"} Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.426071 4802 scope.go:117] "RemoveContainer" containerID="42b73da467217af68ceddf6b21be981add88cb16a8423f2d9a33aa563c1a7caf" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.430163 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6fwp2_64860eca-743c-423a-8ee4-a1e5fd4f667d/console/0.log" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.430582 4802 generic.go:334] "Generic (PLEG): container finished" podID="64860eca-743c-423a-8ee4-a1e5fd4f667d" containerID="3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69" exitCode=2 Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.430653 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6fwp2" event={"ID":"64860eca-743c-423a-8ee4-a1e5fd4f667d","Type":"ContainerDied","Data":"3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69"} Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.430817 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-6fwp2" event={"ID":"64860eca-743c-423a-8ee4-a1e5fd4f667d","Type":"ContainerDied","Data":"2c860be41e73028e35d7e6d250f91451f7a0ca1090f077940c2f31cd18563b76"} Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.430676 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6fwp2" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.466209 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6fwp2"] Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.478582 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6fwp2"] Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.854166 4802 scope.go:117] "RemoveContainer" containerID="3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.877504 4802 scope.go:117] "RemoveContainer" containerID="3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69" Oct 04 05:00:53 crc kubenswrapper[4802]: E1004 05:00:53.878063 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69\": container with ID starting with 3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69 not found: ID does not exist" containerID="3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69" Oct 04 05:00:53 crc kubenswrapper[4802]: I1004 05:00:53.878142 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69"} err="failed to get container status \"3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69\": rpc error: code = NotFound desc = could not find container 
\"3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69\": container with ID starting with 3597a482c8f4870ee16809ad1f87cf28b48145ff405d76c9c7490dcd889dde69 not found: ID does not exist" Oct 04 05:00:54 crc kubenswrapper[4802]: I1004 05:00:54.370023 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64860eca-743c-423a-8ee4-a1e5fd4f667d" path="/var/lib/kubelet/pods/64860eca-743c-423a-8ee4-a1e5fd4f667d/volumes" Oct 04 05:00:55 crc kubenswrapper[4802]: I1004 05:00:55.448016 4802 generic.go:334] "Generic (PLEG): container finished" podID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerID="cd8dcfb3d6f28c6d7b5579c451f03fbddd26d7c6c4f84088bf25a2dd75ee410c" exitCode=0 Oct 04 05:00:55 crc kubenswrapper[4802]: I1004 05:00:55.448056 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" event={"ID":"ad46a69d-5845-4fc3-861b-3d8ebd4106c6","Type":"ContainerDied","Data":"cd8dcfb3d6f28c6d7b5579c451f03fbddd26d7c6c4f84088bf25a2dd75ee410c"} Oct 04 05:00:56 crc kubenswrapper[4802]: I1004 05:00:56.472993 4802 generic.go:334] "Generic (PLEG): container finished" podID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerID="19f83251680a42be2d5a491dd3d776791e734a0defb2108d247db7a7b2bbd516" exitCode=0 Oct 04 05:00:56 crc kubenswrapper[4802]: I1004 05:00:56.473045 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" event={"ID":"ad46a69d-5845-4fc3-861b-3d8ebd4106c6","Type":"ContainerDied","Data":"19f83251680a42be2d5a491dd3d776791e734a0defb2108d247db7a7b2bbd516"} Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.733674 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.871811 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvj6z\" (UniqueName: \"kubernetes.io/projected/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-kube-api-access-bvj6z\") pod \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.871960 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-bundle\") pod \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.872041 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-util\") pod \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\" (UID: \"ad46a69d-5845-4fc3-861b-3d8ebd4106c6\") " Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.872969 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-bundle" (OuterVolumeSpecName: "bundle") pod "ad46a69d-5845-4fc3-861b-3d8ebd4106c6" (UID: "ad46a69d-5845-4fc3-861b-3d8ebd4106c6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.877712 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-kube-api-access-bvj6z" (OuterVolumeSpecName: "kube-api-access-bvj6z") pod "ad46a69d-5845-4fc3-861b-3d8ebd4106c6" (UID: "ad46a69d-5845-4fc3-861b-3d8ebd4106c6"). InnerVolumeSpecName "kube-api-access-bvj6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.882492 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-util" (OuterVolumeSpecName: "util") pod "ad46a69d-5845-4fc3-861b-3d8ebd4106c6" (UID: "ad46a69d-5845-4fc3-861b-3d8ebd4106c6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.973140 4802 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.973180 4802 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-util\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:57 crc kubenswrapper[4802]: I1004 05:00:57.973190 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvj6z\" (UniqueName: \"kubernetes.io/projected/ad46a69d-5845-4fc3-861b-3d8ebd4106c6-kube-api-access-bvj6z\") on node \"crc\" DevicePath \"\"" Oct 04 05:00:58 crc kubenswrapper[4802]: I1004 05:00:58.487390 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" event={"ID":"ad46a69d-5845-4fc3-861b-3d8ebd4106c6","Type":"ContainerDied","Data":"7c3aa9c088deec9c588149e2805896179db81d3e277b277b8364a4b5ca67fe2e"} Oct 04 05:00:58 crc kubenswrapper[4802]: I1004 05:00:58.487442 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3aa9c088deec9c588149e2805896179db81d3e277b277b8364a4b5ca67fe2e" Oct 04 05:00:58 crc kubenswrapper[4802]: I1004 05:00:58.487462 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.822143 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq"] Oct 04 05:01:09 crc kubenswrapper[4802]: E1004 05:01:09.823084 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerName="pull" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.823102 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerName="pull" Oct 04 05:01:09 crc kubenswrapper[4802]: E1004 05:01:09.823116 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerName="util" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.823123 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerName="util" Oct 04 05:01:09 crc kubenswrapper[4802]: E1004 05:01:09.823132 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerName="extract" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.823139 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" containerName="extract" Oct 04 05:01:09 crc kubenswrapper[4802]: E1004 05:01:09.823149 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64860eca-743c-423a-8ee4-a1e5fd4f667d" containerName="console" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.823156 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="64860eca-743c-423a-8ee4-a1e5fd4f667d" containerName="console" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.823284 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad46a69d-5845-4fc3-861b-3d8ebd4106c6" 
containerName="extract" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.823296 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="64860eca-743c-423a-8ee4-a1e5fd4f667d" containerName="console" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.823934 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.826585 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.826680 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.827979 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.828134 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.829312 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gcd75" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.839676 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq"] Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.936252 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2glh\" (UniqueName: \"kubernetes.io/projected/993c123a-61e1-4430-8be4-e17388014589-kube-api-access-x2glh\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " 
pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.936316 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/993c123a-61e1-4430-8be4-e17388014589-webhook-cert\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:09 crc kubenswrapper[4802]: I1004 05:01:09.936345 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/993c123a-61e1-4430-8be4-e17388014589-apiservice-cert\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.037545 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/993c123a-61e1-4430-8be4-e17388014589-webhook-cert\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.037940 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/993c123a-61e1-4430-8be4-e17388014589-apiservice-cert\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.038130 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x2glh\" (UniqueName: \"kubernetes.io/projected/993c123a-61e1-4430-8be4-e17388014589-kube-api-access-x2glh\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.045535 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/993c123a-61e1-4430-8be4-e17388014589-webhook-cert\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.047159 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/993c123a-61e1-4430-8be4-e17388014589-apiservice-cert\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.063978 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2glh\" (UniqueName: \"kubernetes.io/projected/993c123a-61e1-4430-8be4-e17388014589-kube-api-access-x2glh\") pod \"metallb-operator-controller-manager-68b4b95c5-hj8lq\" (UID: \"993c123a-61e1-4430-8be4-e17388014589\") " pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.144881 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.175536 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w"] Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.176711 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.181028 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.181820 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.181880 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g7j5f" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.188086 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w"] Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.343531 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54dc0606-5967-4e11-892e-683ab5ba6092-webhook-cert\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" (UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.343632 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54dc0606-5967-4e11-892e-683ab5ba6092-apiservice-cert\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" 
(UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.343690 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hb6g\" (UniqueName: \"kubernetes.io/projected/54dc0606-5967-4e11-892e-683ab5ba6092-kube-api-access-4hb6g\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" (UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.445780 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54dc0606-5967-4e11-892e-683ab5ba6092-apiservice-cert\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" (UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.445878 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hb6g\" (UniqueName: \"kubernetes.io/projected/54dc0606-5967-4e11-892e-683ab5ba6092-kube-api-access-4hb6g\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" (UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.445954 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54dc0606-5967-4e11-892e-683ab5ba6092-webhook-cert\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" (UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.455342 4802 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54dc0606-5967-4e11-892e-683ab5ba6092-apiservice-cert\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" (UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.456099 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54dc0606-5967-4e11-892e-683ab5ba6092-webhook-cert\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" (UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.479975 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hb6g\" (UniqueName: \"kubernetes.io/projected/54dc0606-5967-4e11-892e-683ab5ba6092-kube-api-access-4hb6g\") pod \"metallb-operator-webhook-server-6cf665f679-9vg2w\" (UID: \"54dc0606-5967-4e11-892e-683ab5ba6092\") " pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.521159 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.605337 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq"] Oct 04 05:01:10 crc kubenswrapper[4802]: W1004 05:01:10.625241 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod993c123a_61e1_4430_8be4_e17388014589.slice/crio-107c89a0918bf3f27e793eb56d2fcef057c1a585c42917abf9831599624018bf WatchSource:0}: Error finding container 107c89a0918bf3f27e793eb56d2fcef057c1a585c42917abf9831599624018bf: Status 404 returned error can't find the container with id 107c89a0918bf3f27e793eb56d2fcef057c1a585c42917abf9831599624018bf Oct 04 05:01:10 crc kubenswrapper[4802]: I1004 05:01:10.750527 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w"] Oct 04 05:01:10 crc kubenswrapper[4802]: W1004 05:01:10.759243 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54dc0606_5967_4e11_892e_683ab5ba6092.slice/crio-44e84367f9873926e3bc0e6b80fefea405d07045634935917709f870ed8e5429 WatchSource:0}: Error finding container 44e84367f9873926e3bc0e6b80fefea405d07045634935917709f870ed8e5429: Status 404 returned error can't find the container with id 44e84367f9873926e3bc0e6b80fefea405d07045634935917709f870ed8e5429 Oct 04 05:01:11 crc kubenswrapper[4802]: I1004 05:01:11.568606 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" event={"ID":"54dc0606-5967-4e11-892e-683ab5ba6092","Type":"ContainerStarted","Data":"44e84367f9873926e3bc0e6b80fefea405d07045634935917709f870ed8e5429"} Oct 04 05:01:11 crc kubenswrapper[4802]: I1004 05:01:11.572697 4802 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" event={"ID":"993c123a-61e1-4430-8be4-e17388014589","Type":"ContainerStarted","Data":"107c89a0918bf3f27e793eb56d2fcef057c1a585c42917abf9831599624018bf"} Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.165358 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fs98p"] Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.167531 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.179933 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fs98p"] Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.253427 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-catalog-content\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.253797 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-utilities\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.253821 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvnhc\" (UniqueName: \"kubernetes.io/projected/a7aad184-95f3-4135-beb0-886f0a9de7ec-kube-api-access-mvnhc\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " 
pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.355531 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-catalog-content\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.355583 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-utilities\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.355617 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvnhc\" (UniqueName: \"kubernetes.io/projected/a7aad184-95f3-4135-beb0-886f0a9de7ec-kube-api-access-mvnhc\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.356297 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-catalog-content\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.356315 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-utilities\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " 
pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.405966 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvnhc\" (UniqueName: \"kubernetes.io/projected/a7aad184-95f3-4135-beb0-886f0a9de7ec-kube-api-access-mvnhc\") pod \"community-operators-fs98p\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.497518 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.623060 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" event={"ID":"993c123a-61e1-4430-8be4-e17388014589","Type":"ContainerStarted","Data":"23038cfcb424ef62795ba4fceb4bde443defdeea06df877090cbd667821c8719"} Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.623464 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.626174 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" event={"ID":"54dc0606-5967-4e11-892e-683ab5ba6092","Type":"ContainerStarted","Data":"37fb8345772b993d47dba1875761eda84f490b710843cd4a55abe3d07106bea8"} Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.626600 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.663671 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" podStartSLOduration=2.141980717 
podStartE2EDuration="7.663586517s" podCreationTimestamp="2025-10-04 05:01:09 +0000 UTC" firstStartedPulling="2025-10-04 05:01:10.629327795 +0000 UTC m=+913.037328420" lastFinishedPulling="2025-10-04 05:01:16.150933595 +0000 UTC m=+918.558934220" observedRunningTime="2025-10-04 05:01:16.653275292 +0000 UTC m=+919.061275927" watchObservedRunningTime="2025-10-04 05:01:16.663586517 +0000 UTC m=+919.071587142" Oct 04 05:01:16 crc kubenswrapper[4802]: I1004 05:01:16.685852 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" podStartSLOduration=1.269700452 podStartE2EDuration="6.685832264s" podCreationTimestamp="2025-10-04 05:01:10 +0000 UTC" firstStartedPulling="2025-10-04 05:01:10.763308599 +0000 UTC m=+913.171309224" lastFinishedPulling="2025-10-04 05:01:16.179440411 +0000 UTC m=+918.587441036" observedRunningTime="2025-10-04 05:01:16.682151618 +0000 UTC m=+919.090152263" watchObservedRunningTime="2025-10-04 05:01:16.685832264 +0000 UTC m=+919.093832889" Oct 04 05:01:17 crc kubenswrapper[4802]: I1004 05:01:17.029983 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fs98p"] Oct 04 05:01:17 crc kubenswrapper[4802]: I1004 05:01:17.646267 4802 generic.go:334] "Generic (PLEG): container finished" podID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerID="de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d" exitCode=0 Oct 04 05:01:17 crc kubenswrapper[4802]: I1004 05:01:17.646400 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs98p" event={"ID":"a7aad184-95f3-4135-beb0-886f0a9de7ec","Type":"ContainerDied","Data":"de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d"} Oct 04 05:01:17 crc kubenswrapper[4802]: I1004 05:01:17.646690 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs98p" 
event={"ID":"a7aad184-95f3-4135-beb0-886f0a9de7ec","Type":"ContainerStarted","Data":"8d22ae64155989ebf098d59b379cb626b24419bd1c4e876934e48d2c842b87b0"} Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.163487 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m7lg"] Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.165068 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.175158 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m7lg"] Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.280506 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skvvl\" (UniqueName: \"kubernetes.io/projected/a23cd9e8-50ca-4f46-9001-46b22c36baff-kube-api-access-skvvl\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.280816 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-utilities\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.280897 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-catalog-content\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.381717 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skvvl\" (UniqueName: \"kubernetes.io/projected/a23cd9e8-50ca-4f46-9001-46b22c36baff-kube-api-access-skvvl\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.381828 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-utilities\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.381856 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-catalog-content\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.382289 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-catalog-content\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.382492 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-utilities\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.418973 4802 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-skvvl\" (UniqueName: \"kubernetes.io/projected/a23cd9e8-50ca-4f46-9001-46b22c36baff-kube-api-access-skvvl\") pod \"redhat-marketplace-7m7lg\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.483407 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.668782 4802 generic.go:334] "Generic (PLEG): container finished" podID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerID="f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f" exitCode=0 Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.668828 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs98p" event={"ID":"a7aad184-95f3-4135-beb0-886f0a9de7ec","Type":"ContainerDied","Data":"f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f"} Oct 04 05:01:18 crc kubenswrapper[4802]: I1004 05:01:18.720889 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m7lg"] Oct 04 05:01:18 crc kubenswrapper[4802]: W1004 05:01:18.728879 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda23cd9e8_50ca_4f46_9001_46b22c36baff.slice/crio-928986466a770126ca777582b70d0d8abb200fb14ac916e019fd7bac699a0116 WatchSource:0}: Error finding container 928986466a770126ca777582b70d0d8abb200fb14ac916e019fd7bac699a0116: Status 404 returned error can't find the container with id 928986466a770126ca777582b70d0d8abb200fb14ac916e019fd7bac699a0116 Oct 04 05:01:18 crc kubenswrapper[4802]: E1004 05:01:18.958318 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda23cd9e8_50ca_4f46_9001_46b22c36baff.slice/crio-906e64d12719eff539ae99098d9ea09ed34ea4dcdb9f0862e05aad82146d74c9.scope\": RecentStats: unable to find data in memory cache]" Oct 04 05:01:19 crc kubenswrapper[4802]: I1004 05:01:19.678279 4802 generic.go:334] "Generic (PLEG): container finished" podID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerID="906e64d12719eff539ae99098d9ea09ed34ea4dcdb9f0862e05aad82146d74c9" exitCode=0 Oct 04 05:01:19 crc kubenswrapper[4802]: I1004 05:01:19.678384 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m7lg" event={"ID":"a23cd9e8-50ca-4f46-9001-46b22c36baff","Type":"ContainerDied","Data":"906e64d12719eff539ae99098d9ea09ed34ea4dcdb9f0862e05aad82146d74c9"} Oct 04 05:01:19 crc kubenswrapper[4802]: I1004 05:01:19.678724 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m7lg" event={"ID":"a23cd9e8-50ca-4f46-9001-46b22c36baff","Type":"ContainerStarted","Data":"928986466a770126ca777582b70d0d8abb200fb14ac916e019fd7bac699a0116"} Oct 04 05:01:19 crc kubenswrapper[4802]: I1004 05:01:19.682636 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs98p" event={"ID":"a7aad184-95f3-4135-beb0-886f0a9de7ec","Type":"ContainerStarted","Data":"012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d"} Oct 04 05:01:19 crc kubenswrapper[4802]: I1004 05:01:19.721750 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fs98p" podStartSLOduration=2.259230443 podStartE2EDuration="3.721731147s" podCreationTimestamp="2025-10-04 05:01:16 +0000 UTC" firstStartedPulling="2025-10-04 05:01:17.649935805 +0000 UTC m=+920.057936440" lastFinishedPulling="2025-10-04 05:01:19.112436519 +0000 UTC m=+921.520437144" observedRunningTime="2025-10-04 05:01:19.719354489 +0000 UTC 
m=+922.127355104" watchObservedRunningTime="2025-10-04 05:01:19.721731147 +0000 UTC m=+922.129731772" Oct 04 05:01:20 crc kubenswrapper[4802]: I1004 05:01:20.690580 4802 generic.go:334] "Generic (PLEG): container finished" podID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerID="8f4ca7f4e0f0eac2ff173ead27c5e122ed75988232bd39157216c893d42c1c18" exitCode=0 Oct 04 05:01:20 crc kubenswrapper[4802]: I1004 05:01:20.690634 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m7lg" event={"ID":"a23cd9e8-50ca-4f46-9001-46b22c36baff","Type":"ContainerDied","Data":"8f4ca7f4e0f0eac2ff173ead27c5e122ed75988232bd39157216c893d42c1c18"} Oct 04 05:01:21 crc kubenswrapper[4802]: I1004 05:01:21.698463 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m7lg" event={"ID":"a23cd9e8-50ca-4f46-9001-46b22c36baff","Type":"ContainerStarted","Data":"1241fa3995d181938915751aaeb502aa132c0c3cbc0061b3be218eb200352f73"} Oct 04 05:01:21 crc kubenswrapper[4802]: I1004 05:01:21.719708 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m7lg" podStartSLOduration=2.077096977 podStartE2EDuration="3.719681756s" podCreationTimestamp="2025-10-04 05:01:18 +0000 UTC" firstStartedPulling="2025-10-04 05:01:19.680238869 +0000 UTC m=+922.088239494" lastFinishedPulling="2025-10-04 05:01:21.322823658 +0000 UTC m=+923.730824273" observedRunningTime="2025-10-04 05:01:21.716961408 +0000 UTC m=+924.124962053" watchObservedRunningTime="2025-10-04 05:01:21.719681756 +0000 UTC m=+924.127682401" Oct 04 05:01:26 crc kubenswrapper[4802]: I1004 05:01:26.498219 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:26 crc kubenswrapper[4802]: I1004 05:01:26.498898 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:26 crc kubenswrapper[4802]: I1004 05:01:26.541390 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:26 crc kubenswrapper[4802]: I1004 05:01:26.781775 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:28 crc kubenswrapper[4802]: I1004 05:01:28.483553 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:28 crc kubenswrapper[4802]: I1004 05:01:28.484049 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:28 crc kubenswrapper[4802]: I1004 05:01:28.523885 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:28 crc kubenswrapper[4802]: I1004 05:01:28.787064 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:29 crc kubenswrapper[4802]: I1004 05:01:29.162766 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fs98p"] Oct 04 05:01:29 crc kubenswrapper[4802]: I1004 05:01:29.163060 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fs98p" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerName="registry-server" containerID="cri-o://012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d" gracePeriod=2 Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.526938 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6cf665f679-9vg2w" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 
05:01:30.679266 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.760382 4802 generic.go:334] "Generic (PLEG): container finished" podID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerID="012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d" exitCode=0 Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.760439 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs98p" event={"ID":"a7aad184-95f3-4135-beb0-886f0a9de7ec","Type":"ContainerDied","Data":"012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d"} Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.760467 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs98p" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.760485 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs98p" event={"ID":"a7aad184-95f3-4135-beb0-886f0a9de7ec","Type":"ContainerDied","Data":"8d22ae64155989ebf098d59b379cb626b24419bd1c4e876934e48d2c842b87b0"} Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.760519 4802 scope.go:117] "RemoveContainer" containerID="012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.778172 4802 scope.go:117] "RemoveContainer" containerID="f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.778589 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvnhc\" (UniqueName: \"kubernetes.io/projected/a7aad184-95f3-4135-beb0-886f0a9de7ec-kube-api-access-mvnhc\") pod \"a7aad184-95f3-4135-beb0-886f0a9de7ec\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " Oct 04 05:01:30 crc 
kubenswrapper[4802]: I1004 05:01:30.778739 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-utilities\") pod \"a7aad184-95f3-4135-beb0-886f0a9de7ec\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.778781 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-catalog-content\") pod \"a7aad184-95f3-4135-beb0-886f0a9de7ec\" (UID: \"a7aad184-95f3-4135-beb0-886f0a9de7ec\") " Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.780621 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-utilities" (OuterVolumeSpecName: "utilities") pod "a7aad184-95f3-4135-beb0-886f0a9de7ec" (UID: "a7aad184-95f3-4135-beb0-886f0a9de7ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.789098 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7aad184-95f3-4135-beb0-886f0a9de7ec-kube-api-access-mvnhc" (OuterVolumeSpecName: "kube-api-access-mvnhc") pod "a7aad184-95f3-4135-beb0-886f0a9de7ec" (UID: "a7aad184-95f3-4135-beb0-886f0a9de7ec"). InnerVolumeSpecName "kube-api-access-mvnhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.831919 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7aad184-95f3-4135-beb0-886f0a9de7ec" (UID: "a7aad184-95f3-4135-beb0-886f0a9de7ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.832892 4802 scope.go:117] "RemoveContainer" containerID="de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.875877 4802 scope.go:117] "RemoveContainer" containerID="012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d" Oct 04 05:01:30 crc kubenswrapper[4802]: E1004 05:01:30.878353 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d\": container with ID starting with 012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d not found: ID does not exist" containerID="012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.878409 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d"} err="failed to get container status \"012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d\": rpc error: code = NotFound desc = could not find container \"012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d\": container with ID starting with 012684ed487a3450f9f6f5dcf8a3a96bfd75016d25e08474cef9f0aecd747a8d not found: ID does not exist" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.878448 4802 scope.go:117] "RemoveContainer" containerID="f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f" Oct 04 05:01:30 crc kubenswrapper[4802]: E1004 05:01:30.878908 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f\": container with ID starting with 
f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f not found: ID does not exist" containerID="f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.878992 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f"} err="failed to get container status \"f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f\": rpc error: code = NotFound desc = could not find container \"f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f\": container with ID starting with f3f9a911462eb19b93bde1837efc0a2cdaca13626ab6cab3142a9f0c7f742b6f not found: ID does not exist" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.879021 4802 scope.go:117] "RemoveContainer" containerID="de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d" Oct 04 05:01:30 crc kubenswrapper[4802]: E1004 05:01:30.879344 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d\": container with ID starting with de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d not found: ID does not exist" containerID="de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.879386 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d"} err="failed to get container status \"de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d\": rpc error: code = NotFound desc = could not find container \"de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d\": container with ID starting with de9fbabc0267737d5022592e7c34266a2c0ce9630d1a8395e17e05adad57cf2d not found: ID does not 
exist" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.881245 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.881269 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7aad184-95f3-4135-beb0-886f0a9de7ec-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:01:30 crc kubenswrapper[4802]: I1004 05:01:30.881280 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvnhc\" (UniqueName: \"kubernetes.io/projected/a7aad184-95f3-4135-beb0-886f0a9de7ec-kube-api-access-mvnhc\") on node \"crc\" DevicePath \"\"" Oct 04 05:01:31 crc kubenswrapper[4802]: I1004 05:01:31.088791 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fs98p"] Oct 04 05:01:31 crc kubenswrapper[4802]: I1004 05:01:31.093746 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fs98p"] Oct 04 05:01:31 crc kubenswrapper[4802]: I1004 05:01:31.960009 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m7lg"] Oct 04 05:01:31 crc kubenswrapper[4802]: I1004 05:01:31.960331 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m7lg" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerName="registry-server" containerID="cri-o://1241fa3995d181938915751aaeb502aa132c0c3cbc0061b3be218eb200352f73" gracePeriod=2 Oct 04 05:01:32 crc kubenswrapper[4802]: I1004 05:01:32.368452 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" path="/var/lib/kubelet/pods/a7aad184-95f3-4135-beb0-886f0a9de7ec/volumes" Oct 04 05:01:34 crc kubenswrapper[4802]: 
I1004 05:01:34.789152 4802 generic.go:334] "Generic (PLEG): container finished" podID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerID="1241fa3995d181938915751aaeb502aa132c0c3cbc0061b3be218eb200352f73" exitCode=0 Oct 04 05:01:34 crc kubenswrapper[4802]: I1004 05:01:34.789213 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m7lg" event={"ID":"a23cd9e8-50ca-4f46-9001-46b22c36baff","Type":"ContainerDied","Data":"1241fa3995d181938915751aaeb502aa132c0c3cbc0061b3be218eb200352f73"} Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.043688 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.168907 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-catalog-content\") pod \"a23cd9e8-50ca-4f46-9001-46b22c36baff\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.169008 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skvvl\" (UniqueName: \"kubernetes.io/projected/a23cd9e8-50ca-4f46-9001-46b22c36baff-kube-api-access-skvvl\") pod \"a23cd9e8-50ca-4f46-9001-46b22c36baff\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.169174 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-utilities\") pod \"a23cd9e8-50ca-4f46-9001-46b22c36baff\" (UID: \"a23cd9e8-50ca-4f46-9001-46b22c36baff\") " Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.170307 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-utilities" (OuterVolumeSpecName: "utilities") pod "a23cd9e8-50ca-4f46-9001-46b22c36baff" (UID: "a23cd9e8-50ca-4f46-9001-46b22c36baff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.185004 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23cd9e8-50ca-4f46-9001-46b22c36baff-kube-api-access-skvvl" (OuterVolumeSpecName: "kube-api-access-skvvl") pod "a23cd9e8-50ca-4f46-9001-46b22c36baff" (UID: "a23cd9e8-50ca-4f46-9001-46b22c36baff"). InnerVolumeSpecName "kube-api-access-skvvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.186666 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a23cd9e8-50ca-4f46-9001-46b22c36baff" (UID: "a23cd9e8-50ca-4f46-9001-46b22c36baff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.271222 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.271268 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23cd9e8-50ca-4f46-9001-46b22c36baff-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.271279 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skvvl\" (UniqueName: \"kubernetes.io/projected/a23cd9e8-50ca-4f46-9001-46b22c36baff-kube-api-access-skvvl\") on node \"crc\" DevicePath \"\"" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.810360 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m7lg" event={"ID":"a23cd9e8-50ca-4f46-9001-46b22c36baff","Type":"ContainerDied","Data":"928986466a770126ca777582b70d0d8abb200fb14ac916e019fd7bac699a0116"} Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.810441 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m7lg" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.810475 4802 scope.go:117] "RemoveContainer" containerID="1241fa3995d181938915751aaeb502aa132c0c3cbc0061b3be218eb200352f73" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.832812 4802 scope.go:117] "RemoveContainer" containerID="8f4ca7f4e0f0eac2ff173ead27c5e122ed75988232bd39157216c893d42c1c18" Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.841061 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m7lg"] Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.844689 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m7lg"] Oct 04 05:01:37 crc kubenswrapper[4802]: I1004 05:01:37.867607 4802 scope.go:117] "RemoveContainer" containerID="906e64d12719eff539ae99098d9ea09ed34ea4dcdb9f0862e05aad82146d74c9" Oct 04 05:01:38 crc kubenswrapper[4802]: I1004 05:01:38.368497 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" path="/var/lib/kubelet/pods/a23cd9e8-50ca-4f46-9001-46b22c36baff/volumes" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.847720 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m7nz4"] Oct 04 05:01:46 crc kubenswrapper[4802]: E1004 05:01:46.849015 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerName="extract-utilities" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.849038 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerName="extract-utilities" Oct 04 05:01:46 crc kubenswrapper[4802]: E1004 05:01:46.849057 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerName="extract-utilities" Oct 04 
05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.849066 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerName="extract-utilities" Oct 04 05:01:46 crc kubenswrapper[4802]: E1004 05:01:46.849079 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerName="registry-server" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.849087 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerName="registry-server" Oct 04 05:01:46 crc kubenswrapper[4802]: E1004 05:01:46.849097 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerName="extract-content" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.849123 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerName="extract-content" Oct 04 05:01:46 crc kubenswrapper[4802]: E1004 05:01:46.849139 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerName="extract-content" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.849147 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerName="extract-content" Oct 04 05:01:46 crc kubenswrapper[4802]: E1004 05:01:46.849162 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerName="registry-server" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.849170 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerName="registry-server" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.849328 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7aad184-95f3-4135-beb0-886f0a9de7ec" containerName="registry-server" Oct 04 05:01:46 crc 
kubenswrapper[4802]: I1004 05:01:46.849341 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23cd9e8-50ca-4f46-9001-46b22c36baff" containerName="registry-server" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.850318 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.860336 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m7nz4"] Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.910872 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98csk\" (UniqueName: \"kubernetes.io/projected/12af870c-8150-4d03-a4a0-a1c6858d009a-kube-api-access-98csk\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.910937 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-catalog-content\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:46 crc kubenswrapper[4802]: I1004 05:01:46.911035 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-utilities\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.012136 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-utilities\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.012208 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98csk\" (UniqueName: \"kubernetes.io/projected/12af870c-8150-4d03-a4a0-a1c6858d009a-kube-api-access-98csk\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.012230 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-catalog-content\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.012842 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-utilities\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.012861 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-catalog-content\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.037050 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98csk\" (UniqueName: 
\"kubernetes.io/projected/12af870c-8150-4d03-a4a0-a1c6858d009a-kube-api-access-98csk\") pod \"certified-operators-m7nz4\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.169798 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.640106 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m7nz4"] Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.878345 4802 generic.go:334] "Generic (PLEG): container finished" podID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerID="8a8512050f1b75b6fc97fca4f79b36edcb6dc84b25edd7226332fc0f11dfc8e5" exitCode=0 Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.878419 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nz4" event={"ID":"12af870c-8150-4d03-a4a0-a1c6858d009a","Type":"ContainerDied","Data":"8a8512050f1b75b6fc97fca4f79b36edcb6dc84b25edd7226332fc0f11dfc8e5"} Oct 04 05:01:47 crc kubenswrapper[4802]: I1004 05:01:47.878463 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nz4" event={"ID":"12af870c-8150-4d03-a4a0-a1c6858d009a","Type":"ContainerStarted","Data":"406dfed2143b3805f1700218d60bff84cb85aec9f2968509cc3f8efbedbd048f"} Oct 04 05:01:48 crc kubenswrapper[4802]: I1004 05:01:48.886871 4802 generic.go:334] "Generic (PLEG): container finished" podID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerID="062114aef5a60e969871b84e50fa7c50796a3a8a2614115717583781c21fe8c0" exitCode=0 Oct 04 05:01:48 crc kubenswrapper[4802]: I1004 05:01:48.886994 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nz4" 
event={"ID":"12af870c-8150-4d03-a4a0-a1c6858d009a","Type":"ContainerDied","Data":"062114aef5a60e969871b84e50fa7c50796a3a8a2614115717583781c21fe8c0"} Oct 04 05:01:49 crc kubenswrapper[4802]: I1004 05:01:49.897472 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nz4" event={"ID":"12af870c-8150-4d03-a4a0-a1c6858d009a","Type":"ContainerStarted","Data":"89a3e344ce4579fe0723c5b9b508df8512ff364fbc1a1cd33eff0c10d27c49e1"} Oct 04 05:01:49 crc kubenswrapper[4802]: I1004 05:01:49.917212 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m7nz4" podStartSLOduration=2.219392453 podStartE2EDuration="3.917183812s" podCreationTimestamp="2025-10-04 05:01:46 +0000 UTC" firstStartedPulling="2025-10-04 05:01:47.88035708 +0000 UTC m=+950.288357705" lastFinishedPulling="2025-10-04 05:01:49.578148439 +0000 UTC m=+951.986149064" observedRunningTime="2025-10-04 05:01:49.912538669 +0000 UTC m=+952.320539314" watchObservedRunningTime="2025-10-04 05:01:49.917183812 +0000 UTC m=+952.325184457" Oct 04 05:01:50 crc kubenswrapper[4802]: I1004 05:01:50.149249 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68b4b95c5-hj8lq" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.006625 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jrt4r"] Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.009478 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.011789 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-w5shb" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.012469 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.014327 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.047361 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c"] Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.048476 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.052789 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.066768 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.066857 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-sockets\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.066903 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-startup\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.066931 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblpx\" (UniqueName: \"kubernetes.io/projected/485db5a3-22ab-44c2-8f05-7cbd0e5054be-kube-api-access-mblpx\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.066966 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics-certs\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.067256 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-reloader\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.067368 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-conf\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.067907 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c"] Oct 04 05:01:51 crc 
kubenswrapper[4802]: I1004 05:01:51.122232 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-h7l7h"] Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.123432 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.126787 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.127169 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.127259 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.127335 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nb78k" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.139810 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-9bbxh"] Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.141023 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.143590 4802 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.159376 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-9bbxh"] Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.168492 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-reloader\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169019 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9efd5f-32a9-4858-930d-84d1fad7f160-cert\") pod \"frr-k8s-webhook-server-64bf5d555-pbl8c\" (UID: \"df9efd5f-32a9-4858-930d-84d1fad7f160\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.168968 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-reloader\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169122 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-conf\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169269 4802 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169337 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzl74\" (UniqueName: \"kubernetes.io/projected/df9efd5f-32a9-4858-930d-84d1fad7f160-kube-api-access-wzl74\") pod \"frr-k8s-webhook-server-64bf5d555-pbl8c\" (UID: \"df9efd5f-32a9-4858-930d-84d1fad7f160\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169386 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-sockets\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169445 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-startup\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169482 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mblpx\" (UniqueName: \"kubernetes.io/projected/485db5a3-22ab-44c2-8f05-7cbd0e5054be-kube-api-access-mblpx\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169513 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics-certs\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169521 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.169888 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-sockets\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: E1004 05:01:51.170033 4802 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Oct 04 05:01:51 crc kubenswrapper[4802]: E1004 05:01:51.170091 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics-certs podName:485db5a3-22ab-44c2-8f05-7cbd0e5054be nodeName:}" failed. No retries permitted until 2025-10-04 05:01:51.670071457 +0000 UTC m=+954.078072082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics-certs") pod "frr-k8s-jrt4r" (UID: "485db5a3-22ab-44c2-8f05-7cbd0e5054be") : secret "frr-k8s-certs-secret" not found Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.170455 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-startup\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.170604 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/485db5a3-22ab-44c2-8f05-7cbd0e5054be-frr-conf\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.205520 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mblpx\" (UniqueName: \"kubernetes.io/projected/485db5a3-22ab-44c2-8f05-7cbd0e5054be-kube-api-access-mblpx\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271327 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plrnt\" (UniqueName: \"kubernetes.io/projected/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-kube-api-access-plrnt\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271380 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-metrics-certs\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271405 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-cert\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271609 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9efd5f-32a9-4858-930d-84d1fad7f160-cert\") pod \"frr-k8s-webhook-server-64bf5d555-pbl8c\" (UID: \"df9efd5f-32a9-4858-930d-84d1fad7f160\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271679 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/08354f64-a424-482b-86d1-49d082f168be-metallb-excludel2\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: E1004 05:01:51.271764 4802 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 04 05:01:51 crc kubenswrapper[4802]: E1004 05:01:51.271817 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df9efd5f-32a9-4858-930d-84d1fad7f160-cert podName:df9efd5f-32a9-4858-930d-84d1fad7f160 nodeName:}" failed. No retries permitted until 2025-10-04 05:01:51.771799798 +0000 UTC m=+954.179800423 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df9efd5f-32a9-4858-930d-84d1fad7f160-cert") pod "frr-k8s-webhook-server-64bf5d555-pbl8c" (UID: "df9efd5f-32a9-4858-930d-84d1fad7f160") : secret "frr-k8s-webhook-server-cert" not found Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271806 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-metrics-certs\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271869 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzl74\" (UniqueName: \"kubernetes.io/projected/df9efd5f-32a9-4858-930d-84d1fad7f160-kube-api-access-wzl74\") pod \"frr-k8s-webhook-server-64bf5d555-pbl8c\" (UID: \"df9efd5f-32a9-4858-930d-84d1fad7f160\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271899 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbgj\" (UniqueName: \"kubernetes.io/projected/08354f64-a424-482b-86d1-49d082f168be-kube-api-access-rfbgj\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.271917 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-memberlist\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.291967 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wzl74\" (UniqueName: \"kubernetes.io/projected/df9efd5f-32a9-4858-930d-84d1fad7f160-kube-api-access-wzl74\") pod \"frr-k8s-webhook-server-64bf5d555-pbl8c\" (UID: \"df9efd5f-32a9-4858-930d-84d1fad7f160\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.373371 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-cert\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.373474 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/08354f64-a424-482b-86d1-49d082f168be-metallb-excludel2\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.373499 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-metrics-certs\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.373545 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbgj\" (UniqueName: \"kubernetes.io/projected/08354f64-a424-482b-86d1-49d082f168be-kube-api-access-rfbgj\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.373566 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-memberlist\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.373593 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plrnt\" (UniqueName: \"kubernetes.io/projected/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-kube-api-access-plrnt\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.373613 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-metrics-certs\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: E1004 05:01:51.374101 4802 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 04 05:01:51 crc kubenswrapper[4802]: E1004 05:01:51.374215 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-memberlist podName:08354f64-a424-482b-86d1-49d082f168be nodeName:}" failed. No retries permitted until 2025-10-04 05:01:51.874193159 +0000 UTC m=+954.282193784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-memberlist") pod "speaker-h7l7h" (UID: "08354f64-a424-482b-86d1-49d082f168be") : secret "metallb-memberlist" not found Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.375331 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/08354f64-a424-482b-86d1-49d082f168be-metallb-excludel2\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.376916 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-cert\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.377218 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-metrics-certs\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.381276 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-metrics-certs\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.392987 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbgj\" (UniqueName: \"kubernetes.io/projected/08354f64-a424-482b-86d1-49d082f168be-kube-api-access-rfbgj\") pod \"speaker-h7l7h\" (UID: 
\"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.395056 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plrnt\" (UniqueName: \"kubernetes.io/projected/ecbf7b6a-2a3e-44c3-8516-dcd1ec840842-kube-api-access-plrnt\") pod \"controller-68d546b9d8-9bbxh\" (UID: \"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842\") " pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.460593 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.643621 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-9bbxh"] Oct 04 05:01:51 crc kubenswrapper[4802]: W1004 05:01:51.649895 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecbf7b6a_2a3e_44c3_8516_dcd1ec840842.slice/crio-b95cf06749e53aa8bed44a15bf3649e9981c4786545c154ee310f57c91309454 WatchSource:0}: Error finding container b95cf06749e53aa8bed44a15bf3649e9981c4786545c154ee310f57c91309454: Status 404 returned error can't find the container with id b95cf06749e53aa8bed44a15bf3649e9981c4786545c154ee310f57c91309454 Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.677286 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics-certs\") pod \"frr-k8s-jrt4r\" (UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.680918 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/485db5a3-22ab-44c2-8f05-7cbd0e5054be-metrics-certs\") pod \"frr-k8s-jrt4r\" 
(UID: \"485db5a3-22ab-44c2-8f05-7cbd0e5054be\") " pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.778628 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9efd5f-32a9-4858-930d-84d1fad7f160-cert\") pod \"frr-k8s-webhook-server-64bf5d555-pbl8c\" (UID: \"df9efd5f-32a9-4858-930d-84d1fad7f160\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.784048 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df9efd5f-32a9-4858-930d-84d1fad7f160-cert\") pod \"frr-k8s-webhook-server-64bf5d555-pbl8c\" (UID: \"df9efd5f-32a9-4858-930d-84d1fad7f160\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.879972 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-memberlist\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.882850 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/08354f64-a424-482b-86d1-49d082f168be-memberlist\") pod \"speaker-h7l7h\" (UID: \"08354f64-a424-482b-86d1-49d082f168be\") " pod="metallb-system/speaker-h7l7h" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.909098 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-9bbxh" event={"ID":"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842","Type":"ContainerStarted","Data":"815e3592895eb5382e80940f79ea464b747ef487a9728d242448a5ba042168cd"} Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.909157 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-68d546b9d8-9bbxh" event={"ID":"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842","Type":"ContainerStarted","Data":"2dbd02f0bd723bef1959be7136c32b17e467fdb9611af258f0ea0bc38aecdb22"} Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.909171 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-9bbxh" event={"ID":"ecbf7b6a-2a3e-44c3-8516-dcd1ec840842","Type":"ContainerStarted","Data":"b95cf06749e53aa8bed44a15bf3649e9981c4786545c154ee310f57c91309454"} Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.909864 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.927152 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.937246 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-9bbxh" podStartSLOduration=0.937223202 podStartE2EDuration="937.223202ms" podCreationTimestamp="2025-10-04 05:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:01:51.936150341 +0000 UTC m=+954.344150966" watchObservedRunningTime="2025-10-04 05:01:51.937223202 +0000 UTC m=+954.345223837" Oct 04 05:01:51 crc kubenswrapper[4802]: I1004 05:01:51.961798 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.039278 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-h7l7h" Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.392898 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c"] Oct 04 05:01:52 crc kubenswrapper[4802]: W1004 05:01:52.404946 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf9efd5f_32a9_4858_930d_84d1fad7f160.slice/crio-53f62705cd92aa3d206f70a4b8bdeebba42cd15884db5dfa12d17ff41c538126 WatchSource:0}: Error finding container 53f62705cd92aa3d206f70a4b8bdeebba42cd15884db5dfa12d17ff41c538126: Status 404 returned error can't find the container with id 53f62705cd92aa3d206f70a4b8bdeebba42cd15884db5dfa12d17ff41c538126 Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.915429 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerStarted","Data":"c4153f518ef5c3c9a6ab4138d3ce2fccbf3fb3db944e00a641cecc04353b4a94"} Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.917554 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" event={"ID":"df9efd5f-32a9-4858-930d-84d1fad7f160","Type":"ContainerStarted","Data":"53f62705cd92aa3d206f70a4b8bdeebba42cd15884db5dfa12d17ff41c538126"} Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.919143 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-h7l7h" event={"ID":"08354f64-a424-482b-86d1-49d082f168be","Type":"ContainerStarted","Data":"e77fbcb6476fdedda3eec733e5573171e778c2d9275a1d4db4d968ca54f5639b"} Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.919214 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-h7l7h" 
event={"ID":"08354f64-a424-482b-86d1-49d082f168be","Type":"ContainerStarted","Data":"ecc8426b8b2d9a26985b1c1388952cc65537f35e466992942ec3c63c84b72b25"} Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.919229 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-h7l7h" event={"ID":"08354f64-a424-482b-86d1-49d082f168be","Type":"ContainerStarted","Data":"9bcd56a19c8bc7d437393d9a33ccb586deb2c2d7b689ca9f9e3649905f22de9f"} Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.919424 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-h7l7h" Oct 04 05:01:52 crc kubenswrapper[4802]: I1004 05:01:52.950486 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-h7l7h" podStartSLOduration=1.95045698 podStartE2EDuration="1.95045698s" podCreationTimestamp="2025-10-04 05:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:01:52.950035858 +0000 UTC m=+955.358036483" watchObservedRunningTime="2025-10-04 05:01:52.95045698 +0000 UTC m=+955.358457605" Oct 04 05:01:57 crc kubenswrapper[4802]: I1004 05:01:57.171004 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:57 crc kubenswrapper[4802]: I1004 05:01:57.171395 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:57 crc kubenswrapper[4802]: I1004 05:01:57.217150 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:58 crc kubenswrapper[4802]: I1004 05:01:58.016565 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:01:58 crc kubenswrapper[4802]: I1004 
05:01:58.112596 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m7nz4"] Oct 04 05:01:59 crc kubenswrapper[4802]: I1004 05:01:59.972200 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m7nz4" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerName="registry-server" containerID="cri-o://89a3e344ce4579fe0723c5b9b508df8512ff364fbc1a1cd33eff0c10d27c49e1" gracePeriod=2 Oct 04 05:02:00 crc kubenswrapper[4802]: I1004 05:02:00.991044 4802 generic.go:334] "Generic (PLEG): container finished" podID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerID="89a3e344ce4579fe0723c5b9b508df8512ff364fbc1a1cd33eff0c10d27c49e1" exitCode=0 Oct 04 05:02:00 crc kubenswrapper[4802]: I1004 05:02:00.991104 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nz4" event={"ID":"12af870c-8150-4d03-a4a0-a1c6858d009a","Type":"ContainerDied","Data":"89a3e344ce4579fe0723c5b9b508df8512ff364fbc1a1cd33eff0c10d27c49e1"} Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.172613 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.325406 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-catalog-content\") pod \"12af870c-8150-4d03-a4a0-a1c6858d009a\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.326035 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98csk\" (UniqueName: \"kubernetes.io/projected/12af870c-8150-4d03-a4a0-a1c6858d009a-kube-api-access-98csk\") pod \"12af870c-8150-4d03-a4a0-a1c6858d009a\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.327490 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-utilities\") pod \"12af870c-8150-4d03-a4a0-a1c6858d009a\" (UID: \"12af870c-8150-4d03-a4a0-a1c6858d009a\") " Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.328810 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-utilities" (OuterVolumeSpecName: "utilities") pod "12af870c-8150-4d03-a4a0-a1c6858d009a" (UID: "12af870c-8150-4d03-a4a0-a1c6858d009a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.334009 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12af870c-8150-4d03-a4a0-a1c6858d009a-kube-api-access-98csk" (OuterVolumeSpecName: "kube-api-access-98csk") pod "12af870c-8150-4d03-a4a0-a1c6858d009a" (UID: "12af870c-8150-4d03-a4a0-a1c6858d009a"). InnerVolumeSpecName "kube-api-access-98csk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.387811 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12af870c-8150-4d03-a4a0-a1c6858d009a" (UID: "12af870c-8150-4d03-a4a0-a1c6858d009a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.430514 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.430544 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12af870c-8150-4d03-a4a0-a1c6858d009a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.430556 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98csk\" (UniqueName: \"kubernetes.io/projected/12af870c-8150-4d03-a4a0-a1c6858d009a-kube-api-access-98csk\") on node \"crc\" DevicePath \"\"" Oct 04 05:02:01 crc kubenswrapper[4802]: I1004 05:02:01.468286 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-9bbxh" Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.001427 4802 generic.go:334] "Generic (PLEG): container finished" podID="485db5a3-22ab-44c2-8f05-7cbd0e5054be" containerID="0e5e57bf30bf802c9e9f267a08f7d914cee434e4405f98225d94148c54ac8c02" exitCode=0 Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.001525 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" 
event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerDied","Data":"0e5e57bf30bf802c9e9f267a08f7d914cee434e4405f98225d94148c54ac8c02"} Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.004914 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" event={"ID":"df9efd5f-32a9-4858-930d-84d1fad7f160","Type":"ContainerStarted","Data":"c328eb019d086c3de7b8fe5aac72a8d8e56cd55a115893a52fee5c79e681eae2"} Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.004996 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.019083 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7nz4" event={"ID":"12af870c-8150-4d03-a4a0-a1c6858d009a","Type":"ContainerDied","Data":"406dfed2143b3805f1700218d60bff84cb85aec9f2968509cc3f8efbedbd048f"} Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.019164 4802 scope.go:117] "RemoveContainer" containerID="89a3e344ce4579fe0723c5b9b508df8512ff364fbc1a1cd33eff0c10d27c49e1" Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.019203 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m7nz4" Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.044394 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-h7l7h" Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.050549 4802 scope.go:117] "RemoveContainer" containerID="062114aef5a60e969871b84e50fa7c50796a3a8a2614115717583781c21fe8c0" Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.059166 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" podStartSLOduration=2.463832461 podStartE2EDuration="11.059141978s" podCreationTimestamp="2025-10-04 05:01:51 +0000 UTC" firstStartedPulling="2025-10-04 05:01:52.408250312 +0000 UTC m=+954.816250937" lastFinishedPulling="2025-10-04 05:02:01.003559829 +0000 UTC m=+963.411560454" observedRunningTime="2025-10-04 05:02:02.058755557 +0000 UTC m=+964.466756182" watchObservedRunningTime="2025-10-04 05:02:02.059141978 +0000 UTC m=+964.467142603" Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.094517 4802 scope.go:117] "RemoveContainer" containerID="8a8512050f1b75b6fc97fca4f79b36edcb6dc84b25edd7226332fc0f11dfc8e5" Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.105375 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m7nz4"] Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.113464 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m7nz4"] Oct 04 05:02:02 crc kubenswrapper[4802]: I1004 05:02:02.369474 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" path="/var/lib/kubelet/pods/12af870c-8150-4d03-a4a0-a1c6858d009a/volumes" Oct 04 05:02:03 crc kubenswrapper[4802]: I1004 05:02:03.027688 4802 generic.go:334] "Generic (PLEG): container finished" 
podID="485db5a3-22ab-44c2-8f05-7cbd0e5054be" containerID="d955d3786c6a40e976d1decf52a2a9de1a0b40b44ab1f07296aec8cd05e1baf0" exitCode=0 Oct 04 05:02:03 crc kubenswrapper[4802]: I1004 05:02:03.027889 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerDied","Data":"d955d3786c6a40e976d1decf52a2a9de1a0b40b44ab1f07296aec8cd05e1baf0"} Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.041588 4802 generic.go:334] "Generic (PLEG): container finished" podID="485db5a3-22ab-44c2-8f05-7cbd0e5054be" containerID="703f92bdb70874524867647a8425cd2ad81999626d72f348bdf705b87a69609e" exitCode=0 Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.041668 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerDied","Data":"703f92bdb70874524867647a8425cd2ad81999626d72f348bdf705b87a69609e"} Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.266244 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n8hmn"] Oct 04 05:02:05 crc kubenswrapper[4802]: E1004 05:02:05.266628 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerName="extract-utilities" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.266666 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerName="extract-utilities" Oct 04 05:02:05 crc kubenswrapper[4802]: E1004 05:02:05.266679 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerName="extract-content" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.266689 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerName="extract-content" Oct 04 05:02:05 crc 
kubenswrapper[4802]: E1004 05:02:05.266709 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerName="registry-server" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.266717 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerName="registry-server" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.266846 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="12af870c-8150-4d03-a4a0-a1c6858d009a" containerName="registry-server" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.267378 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n8hmn" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.269774 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fzfwb" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.270882 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.271193 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.293291 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n8hmn"] Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.387037 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84s4p\" (UniqueName: \"kubernetes.io/projected/782eb848-bcd8-4007-bbd6-1d108635c1fa-kube-api-access-84s4p\") pod \"openstack-operator-index-n8hmn\" (UID: \"782eb848-bcd8-4007-bbd6-1d108635c1fa\") " pod="openstack-operators/openstack-operator-index-n8hmn" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 
05:02:05.490316 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84s4p\" (UniqueName: \"kubernetes.io/projected/782eb848-bcd8-4007-bbd6-1d108635c1fa-kube-api-access-84s4p\") pod \"openstack-operator-index-n8hmn\" (UID: \"782eb848-bcd8-4007-bbd6-1d108635c1fa\") " pod="openstack-operators/openstack-operator-index-n8hmn" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.508062 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84s4p\" (UniqueName: \"kubernetes.io/projected/782eb848-bcd8-4007-bbd6-1d108635c1fa-kube-api-access-84s4p\") pod \"openstack-operator-index-n8hmn\" (UID: \"782eb848-bcd8-4007-bbd6-1d108635c1fa\") " pod="openstack-operators/openstack-operator-index-n8hmn" Oct 04 05:02:05 crc kubenswrapper[4802]: I1004 05:02:05.582464 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n8hmn" Oct 04 05:02:06 crc kubenswrapper[4802]: I1004 05:02:06.019266 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n8hmn"] Oct 04 05:02:06 crc kubenswrapper[4802]: I1004 05:02:06.053621 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerStarted","Data":"d8ccc22d3b84c0ca628c74efd5d1c17a28d6da3ec30230d5b847a8e1da639aff"} Oct 04 05:02:06 crc kubenswrapper[4802]: I1004 05:02:06.053781 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerStarted","Data":"a554e12cae136a0ce220cc5d6de5ea71cc1e83cb0d9aacb8992fe0325c57a927"} Oct 04 05:02:06 crc kubenswrapper[4802]: I1004 05:02:06.057349 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n8hmn" 
event={"ID":"782eb848-bcd8-4007-bbd6-1d108635c1fa","Type":"ContainerStarted","Data":"6e04eb3d389ebc10001f9bbb5ac9b7b046e8e1033d3a3c193fbb62804084351b"} Oct 04 05:02:07 crc kubenswrapper[4802]: I1004 05:02:07.070364 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerStarted","Data":"f3985ae3a53fd0129e0819d4d2a83c64ec4e8fda7f79ecccd36bc613e03bcd9b"} Oct 04 05:02:07 crc kubenswrapper[4802]: I1004 05:02:07.070755 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerStarted","Data":"1e21874124346a889308d6fc12bbfc59ea02cb854b8bd2950c993e0bde38741f"} Oct 04 05:02:07 crc kubenswrapper[4802]: I1004 05:02:07.070770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerStarted","Data":"bdfd96379fbfdf55a6083045c90e2e6e748559c8194afb75a4bb1ba9310fc55e"} Oct 04 05:02:08 crc kubenswrapper[4802]: I1004 05:02:08.081457 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrt4r" event={"ID":"485db5a3-22ab-44c2-8f05-7cbd0e5054be","Type":"ContainerStarted","Data":"223edc91af49fa3ef3d66038c1eaa4c082101cbba7bd15743ee8e1ef536e14dc"} Oct 04 05:02:08 crc kubenswrapper[4802]: I1004 05:02:08.081655 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:02:08 crc kubenswrapper[4802]: I1004 05:02:08.107659 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jrt4r" podStartSLOduration=9.185969161 podStartE2EDuration="18.107614207s" podCreationTimestamp="2025-10-04 05:01:50 +0000 UTC" firstStartedPulling="2025-10-04 05:01:52.026122956 +0000 UTC m=+954.434123581" lastFinishedPulling="2025-10-04 05:02:00.947767992 +0000 UTC m=+963.355768627" 
observedRunningTime="2025-10-04 05:02:08.104235031 +0000 UTC m=+970.512235676" watchObservedRunningTime="2025-10-04 05:02:08.107614207 +0000 UTC m=+970.515614832" Oct 04 05:02:08 crc kubenswrapper[4802]: I1004 05:02:08.442224 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n8hmn"] Oct 04 05:02:09 crc kubenswrapper[4802]: I1004 05:02:09.050751 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pf97s"] Oct 04 05:02:09 crc kubenswrapper[4802]: I1004 05:02:09.052015 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:09 crc kubenswrapper[4802]: I1004 05:02:09.062058 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pf97s"] Oct 04 05:02:09 crc kubenswrapper[4802]: I1004 05:02:09.144363 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhgmv\" (UniqueName: \"kubernetes.io/projected/1f691215-13a7-4ac7-9399-430acb279349-kube-api-access-hhgmv\") pod \"openstack-operator-index-pf97s\" (UID: \"1f691215-13a7-4ac7-9399-430acb279349\") " pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:09 crc kubenswrapper[4802]: I1004 05:02:09.246209 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhgmv\" (UniqueName: \"kubernetes.io/projected/1f691215-13a7-4ac7-9399-430acb279349-kube-api-access-hhgmv\") pod \"openstack-operator-index-pf97s\" (UID: \"1f691215-13a7-4ac7-9399-430acb279349\") " pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:09 crc kubenswrapper[4802]: I1004 05:02:09.269876 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhgmv\" (UniqueName: \"kubernetes.io/projected/1f691215-13a7-4ac7-9399-430acb279349-kube-api-access-hhgmv\") 
pod \"openstack-operator-index-pf97s\" (UID: \"1f691215-13a7-4ac7-9399-430acb279349\") " pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:09 crc kubenswrapper[4802]: I1004 05:02:09.379519 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:10 crc kubenswrapper[4802]: I1004 05:02:10.525733 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pf97s"] Oct 04 05:02:10 crc kubenswrapper[4802]: W1004 05:02:10.719782 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f691215_13a7_4ac7_9399_430acb279349.slice/crio-f666422cdbdcd2e43ec1ef052909e28d17c3e577e4e74c14286093d461d92139 WatchSource:0}: Error finding container f666422cdbdcd2e43ec1ef052909e28d17c3e577e4e74c14286093d461d92139: Status 404 returned error can't find the container with id f666422cdbdcd2e43ec1ef052909e28d17c3e577e4e74c14286093d461d92139 Oct 04 05:02:11 crc kubenswrapper[4802]: I1004 05:02:11.103186 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pf97s" event={"ID":"1f691215-13a7-4ac7-9399-430acb279349","Type":"ContainerStarted","Data":"f666422cdbdcd2e43ec1ef052909e28d17c3e577e4e74c14286093d461d92139"} Oct 04 05:02:11 crc kubenswrapper[4802]: I1004 05:02:11.927981 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:02:11 crc kubenswrapper[4802]: I1004 05:02:11.970197 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-pbl8c" Oct 04 05:02:11 crc kubenswrapper[4802]: I1004 05:02:11.992908 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.112100 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n8hmn" event={"ID":"782eb848-bcd8-4007-bbd6-1d108635c1fa","Type":"ContainerStarted","Data":"1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19"} Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.112257 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-n8hmn" podUID="782eb848-bcd8-4007-bbd6-1d108635c1fa" containerName="registry-server" containerID="cri-o://1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19" gracePeriod=2 Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.115976 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pf97s" event={"ID":"1f691215-13a7-4ac7-9399-430acb279349","Type":"ContainerStarted","Data":"4ece393ed60b41a62ad5943a3d39b7375adbb7e7ade8cc84593ca76c50b39481"} Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.167994 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n8hmn" podStartSLOduration=1.560710338 podStartE2EDuration="7.167967994s" podCreationTimestamp="2025-10-04 05:02:05 +0000 UTC" firstStartedPulling="2025-10-04 05:02:06.033676423 +0000 UTC m=+968.441677058" lastFinishedPulling="2025-10-04 05:02:11.640934049 +0000 UTC m=+974.048934714" observedRunningTime="2025-10-04 05:02:12.13473342 +0000 UTC m=+974.542734055" watchObservedRunningTime="2025-10-04 05:02:12.167967994 +0000 UTC m=+974.575968649" Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.183664 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pf97s" podStartSLOduration=2.284121329 podStartE2EDuration="3.183619863s" podCreationTimestamp="2025-10-04 05:02:09 +0000 UTC" firstStartedPulling="2025-10-04 05:02:10.722305106 +0000 UTC m=+973.130305731" 
lastFinishedPulling="2025-10-04 05:02:11.62180364 +0000 UTC m=+974.029804265" observedRunningTime="2025-10-04 05:02:12.176823228 +0000 UTC m=+974.584823853" watchObservedRunningTime="2025-10-04 05:02:12.183619863 +0000 UTC m=+974.591620488" Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.475470 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n8hmn" Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.594189 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84s4p\" (UniqueName: \"kubernetes.io/projected/782eb848-bcd8-4007-bbd6-1d108635c1fa-kube-api-access-84s4p\") pod \"782eb848-bcd8-4007-bbd6-1d108635c1fa\" (UID: \"782eb848-bcd8-4007-bbd6-1d108635c1fa\") " Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.602870 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782eb848-bcd8-4007-bbd6-1d108635c1fa-kube-api-access-84s4p" (OuterVolumeSpecName: "kube-api-access-84s4p") pod "782eb848-bcd8-4007-bbd6-1d108635c1fa" (UID: "782eb848-bcd8-4007-bbd6-1d108635c1fa"). InnerVolumeSpecName "kube-api-access-84s4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:02:12 crc kubenswrapper[4802]: I1004 05:02:12.695674 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84s4p\" (UniqueName: \"kubernetes.io/projected/782eb848-bcd8-4007-bbd6-1d108635c1fa-kube-api-access-84s4p\") on node \"crc\" DevicePath \"\"" Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.125088 4802 generic.go:334] "Generic (PLEG): container finished" podID="782eb848-bcd8-4007-bbd6-1d108635c1fa" containerID="1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19" exitCode=0 Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.125162 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n8hmn" Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.125170 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n8hmn" event={"ID":"782eb848-bcd8-4007-bbd6-1d108635c1fa","Type":"ContainerDied","Data":"1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19"} Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.125508 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n8hmn" event={"ID":"782eb848-bcd8-4007-bbd6-1d108635c1fa","Type":"ContainerDied","Data":"6e04eb3d389ebc10001f9bbb5ac9b7b046e8e1033d3a3c193fbb62804084351b"} Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.125543 4802 scope.go:117] "RemoveContainer" containerID="1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19" Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.156602 4802 scope.go:117] "RemoveContainer" containerID="1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19" Oct 04 05:02:13 crc kubenswrapper[4802]: E1004 05:02:13.160271 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19\": container with ID starting with 1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19 not found: ID does not exist" containerID="1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19" Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.160343 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19"} err="failed to get container status \"1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19\": rpc error: code = NotFound desc = could not find container 
\"1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19\": container with ID starting with 1dc50a76a5f6daac2f86ccf39083a3193e718712188b09e8ecb5c86f06e62f19 not found: ID does not exist" Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.166746 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n8hmn"] Oct 04 05:02:13 crc kubenswrapper[4802]: I1004 05:02:13.175829 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-n8hmn"] Oct 04 05:02:14 crc kubenswrapper[4802]: I1004 05:02:14.370165 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782eb848-bcd8-4007-bbd6-1d108635c1fa" path="/var/lib/kubelet/pods/782eb848-bcd8-4007-bbd6-1d108635c1fa/volumes" Oct 04 05:02:19 crc kubenswrapper[4802]: I1004 05:02:19.380196 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:19 crc kubenswrapper[4802]: I1004 05:02:19.380554 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:19 crc kubenswrapper[4802]: I1004 05:02:19.406859 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:20 crc kubenswrapper[4802]: I1004 05:02:20.233460 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-pf97s" Oct 04 05:02:21 crc kubenswrapper[4802]: I1004 05:02:21.931363 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jrt4r" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.106165 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq"] Oct 04 05:02:26 crc kubenswrapper[4802]: E1004 
05:02:26.106730 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782eb848-bcd8-4007-bbd6-1d108635c1fa" containerName="registry-server" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.106743 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="782eb848-bcd8-4007-bbd6-1d108635c1fa" containerName="registry-server" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.106845 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="782eb848-bcd8-4007-bbd6-1d108635c1fa" containerName="registry-server" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.107703 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.110634 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bddss" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.123438 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq"] Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.197852 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-bundle\") pod \"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.197933 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlsxx\" (UniqueName: \"kubernetes.io/projected/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-kube-api-access-tlsxx\") pod 
\"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.197981 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-util\") pod \"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.299524 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-bundle\") pod \"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.299718 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlsxx\" (UniqueName: \"kubernetes.io/projected/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-kube-api-access-tlsxx\") pod \"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.299799 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-util\") pod \"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " 
pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.300983 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-util\") pod \"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.301143 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-bundle\") pod \"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.335735 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlsxx\" (UniqueName: \"kubernetes.io/projected/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-kube-api-access-tlsxx\") pod \"8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.434123 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:26 crc kubenswrapper[4802]: I1004 05:02:26.853140 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq"] Oct 04 05:02:26 crc kubenswrapper[4802]: W1004 05:02:26.867240 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8aefca93_0b5f_4dc0_9e93_4b726272fc8d.slice/crio-7a7bc859469e1b7497ce507e316329ceb583bb8fb5f5a64b22b5e026f911073e WatchSource:0}: Error finding container 7a7bc859469e1b7497ce507e316329ceb583bb8fb5f5a64b22b5e026f911073e: Status 404 returned error can't find the container with id 7a7bc859469e1b7497ce507e316329ceb583bb8fb5f5a64b22b5e026f911073e Oct 04 05:02:27 crc kubenswrapper[4802]: I1004 05:02:27.240492 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" event={"ID":"8aefca93-0b5f-4dc0-9e93-4b726272fc8d","Type":"ContainerStarted","Data":"a7f7695604231207d58f982735b5a5f5c75158474707e4a14bd8efb4b2a693c0"} Oct 04 05:02:27 crc kubenswrapper[4802]: I1004 05:02:27.240545 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" event={"ID":"8aefca93-0b5f-4dc0-9e93-4b726272fc8d","Type":"ContainerStarted","Data":"7a7bc859469e1b7497ce507e316329ceb583bb8fb5f5a64b22b5e026f911073e"} Oct 04 05:02:28 crc kubenswrapper[4802]: I1004 05:02:28.249783 4802 generic.go:334] "Generic (PLEG): container finished" podID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerID="a7f7695604231207d58f982735b5a5f5c75158474707e4a14bd8efb4b2a693c0" exitCode=0 Oct 04 05:02:28 crc kubenswrapper[4802]: I1004 05:02:28.250062 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" event={"ID":"8aefca93-0b5f-4dc0-9e93-4b726272fc8d","Type":"ContainerDied","Data":"a7f7695604231207d58f982735b5a5f5c75158474707e4a14bd8efb4b2a693c0"} Oct 04 05:02:34 crc kubenswrapper[4802]: I1004 05:02:34.291902 4802 generic.go:334] "Generic (PLEG): container finished" podID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerID="1a93f649f7c16d5c734b8e41e9a9b5ebf05789d3c838ded99f50ae10ad6dd260" exitCode=0 Oct 04 05:02:34 crc kubenswrapper[4802]: I1004 05:02:34.292003 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" event={"ID":"8aefca93-0b5f-4dc0-9e93-4b726272fc8d","Type":"ContainerDied","Data":"1a93f649f7c16d5c734b8e41e9a9b5ebf05789d3c838ded99f50ae10ad6dd260"} Oct 04 05:02:35 crc kubenswrapper[4802]: I1004 05:02:35.304330 4802 generic.go:334] "Generic (PLEG): container finished" podID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerID="41060d62a997de60ade59d314b1b4a7927f086bbe533511a25305302d0a99351" exitCode=0 Oct 04 05:02:35 crc kubenswrapper[4802]: I1004 05:02:35.304452 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" event={"ID":"8aefca93-0b5f-4dc0-9e93-4b726272fc8d","Type":"ContainerDied","Data":"41060d62a997de60ade59d314b1b4a7927f086bbe533511a25305302d0a99351"} Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.578025 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.751708 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-util\") pod \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.752343 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlsxx\" (UniqueName: \"kubernetes.io/projected/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-kube-api-access-tlsxx\") pod \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.752533 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-bundle\") pod \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\" (UID: \"8aefca93-0b5f-4dc0-9e93-4b726272fc8d\") " Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.753835 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-bundle" (OuterVolumeSpecName: "bundle") pod "8aefca93-0b5f-4dc0-9e93-4b726272fc8d" (UID: "8aefca93-0b5f-4dc0-9e93-4b726272fc8d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.760893 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-kube-api-access-tlsxx" (OuterVolumeSpecName: "kube-api-access-tlsxx") pod "8aefca93-0b5f-4dc0-9e93-4b726272fc8d" (UID: "8aefca93-0b5f-4dc0-9e93-4b726272fc8d"). InnerVolumeSpecName "kube-api-access-tlsxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.775009 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-util" (OuterVolumeSpecName: "util") pod "8aefca93-0b5f-4dc0-9e93-4b726272fc8d" (UID: "8aefca93-0b5f-4dc0-9e93-4b726272fc8d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.854123 4802 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.854170 4802 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-util\") on node \"crc\" DevicePath \"\"" Oct 04 05:02:36 crc kubenswrapper[4802]: I1004 05:02:36.854182 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlsxx\" (UniqueName: \"kubernetes.io/projected/8aefca93-0b5f-4dc0-9e93-4b726272fc8d-kube-api-access-tlsxx\") on node \"crc\" DevicePath \"\"" Oct 04 05:02:37 crc kubenswrapper[4802]: I1004 05:02:37.322066 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" event={"ID":"8aefca93-0b5f-4dc0-9e93-4b726272fc8d","Type":"ContainerDied","Data":"7a7bc859469e1b7497ce507e316329ceb583bb8fb5f5a64b22b5e026f911073e"} Oct 04 05:02:37 crc kubenswrapper[4802]: I1004 05:02:37.322111 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a7bc859469e1b7497ce507e316329ceb583bb8fb5f5a64b22b5e026f911073e" Oct 04 05:02:37 crc kubenswrapper[4802]: I1004 05:02:37.322154 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.826999 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c"] Oct 04 05:02:43 crc kubenswrapper[4802]: E1004 05:02:43.828150 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerName="util" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.828175 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerName="util" Oct 04 05:02:43 crc kubenswrapper[4802]: E1004 05:02:43.828194 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerName="pull" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.828208 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerName="pull" Oct 04 05:02:43 crc kubenswrapper[4802]: E1004 05:02:43.828229 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerName="extract" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.828239 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerName="extract" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.828415 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aefca93-0b5f-4dc0-9e93-4b726272fc8d" containerName="extract" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.829477 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.832980 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-lnvdp" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.846606 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grl5r\" (UniqueName: \"kubernetes.io/projected/b14a29f9-9112-4384-95e9-cd2ecb3e3c4b-kube-api-access-grl5r\") pod \"openstack-operator-controller-operator-7b96bd67b7-wvw9c\" (UID: \"b14a29f9-9112-4384-95e9-cd2ecb3e3c4b\") " pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.867242 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c"] Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.947986 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grl5r\" (UniqueName: \"kubernetes.io/projected/b14a29f9-9112-4384-95e9-cd2ecb3e3c4b-kube-api-access-grl5r\") pod \"openstack-operator-controller-operator-7b96bd67b7-wvw9c\" (UID: \"b14a29f9-9112-4384-95e9-cd2ecb3e3c4b\") " pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" Oct 04 05:02:43 crc kubenswrapper[4802]: I1004 05:02:43.969575 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grl5r\" (UniqueName: \"kubernetes.io/projected/b14a29f9-9112-4384-95e9-cd2ecb3e3c4b-kube-api-access-grl5r\") pod \"openstack-operator-controller-operator-7b96bd67b7-wvw9c\" (UID: \"b14a29f9-9112-4384-95e9-cd2ecb3e3c4b\") " pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" Oct 04 05:02:44 crc kubenswrapper[4802]: I1004 05:02:44.155591 4802 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" Oct 04 05:02:44 crc kubenswrapper[4802]: I1004 05:02:44.392609 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c"] Oct 04 05:02:45 crc kubenswrapper[4802]: I1004 05:02:45.380906 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" event={"ID":"b14a29f9-9112-4384-95e9-cd2ecb3e3c4b","Type":"ContainerStarted","Data":"5f6c11a0ebc3f84f4f3afaf46defb0c786d45d913172e9aa306b5b6fed97d775"} Oct 04 05:02:48 crc kubenswrapper[4802]: I1004 05:02:48.400332 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" event={"ID":"b14a29f9-9112-4384-95e9-cd2ecb3e3c4b","Type":"ContainerStarted","Data":"7a1118a4588913e48c3cbcb40ee79bb73ee7d43ca1556b2666b75259ca11ce10"} Oct 04 05:02:51 crc kubenswrapper[4802]: I1004 05:02:51.430512 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" event={"ID":"b14a29f9-9112-4384-95e9-cd2ecb3e3c4b","Type":"ContainerStarted","Data":"199373be5a5b46d56673845ab209f4bdfbeab115c04a198b7a1cd7bd5b9f15d5"} Oct 04 05:02:51 crc kubenswrapper[4802]: I1004 05:02:51.431362 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" Oct 04 05:02:51 crc kubenswrapper[4802]: I1004 05:02:51.469296 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" podStartSLOduration=2.188049373 podStartE2EDuration="8.469275725s" podCreationTimestamp="2025-10-04 05:02:43 +0000 UTC" firstStartedPulling="2025-10-04 
05:02:44.400690558 +0000 UTC m=+1006.808691183" lastFinishedPulling="2025-10-04 05:02:50.68191691 +0000 UTC m=+1013.089917535" observedRunningTime="2025-10-04 05:02:51.462551692 +0000 UTC m=+1013.870552317" watchObservedRunningTime="2025-10-04 05:02:51.469275725 +0000 UTC m=+1013.877276350" Oct 04 05:02:54 crc kubenswrapper[4802]: I1004 05:02:54.161068 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b96bd67b7-wvw9c" Oct 04 05:03:22 crc kubenswrapper[4802]: I1004 05:03:22.662210 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:03:22 crc kubenswrapper[4802]: I1004 05:03:22.663066 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.375156 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.377093 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.381958 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nj7jt" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.383473 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.384757 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.387831 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-w8dmc" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.389359 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.405868 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.410997 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.412596 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.414754 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-7dq92" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.421954 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.442727 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.443925 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.447413 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.453005 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6d6sf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.471317 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.472338 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.482680 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-t7288" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.493698 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.511067 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.512470 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.516966 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xmmcp" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.516971 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.527873 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.529090 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.534957 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-btp9h" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.553743 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.556576 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.576502 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwngk\" (UniqueName: \"kubernetes.io/projected/e8f690af-8476-4fae-821f-cc822c9a1273-kube-api-access-lwngk\") pod \"barbican-operator-controller-manager-5f7c849b98-qwpjf\" (UID: \"e8f690af-8476-4fae-821f-cc822c9a1273\") " pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.576563 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert\") pod \"infra-operator-controller-manager-658588b8c9-gj2l2\" (UID: \"8975b7de-977f-45a6-b619-1bae2838c9eb\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.576597 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjd44\" (UniqueName: \"kubernetes.io/projected/d877d19f-e34d-429e-8bf7-e7f9c6c141d1-kube-api-access-cjd44\") pod \"heat-operator-controller-manager-8f58bc9db-bczqd\" (UID: 
\"d877d19f-e34d-429e-8bf7-e7f9c6c141d1\") " pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.576626 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbkpm\" (UniqueName: \"kubernetes.io/projected/fe92c819-2989-4bb4-8051-6d457ac6b121-kube-api-access-cbkpm\") pod \"designate-operator-controller-manager-75dfd9b554-tbhkj\" (UID: \"fe92c819-2989-4bb4-8051-6d457ac6b121\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.576681 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vp7\" (UniqueName: \"kubernetes.io/projected/50a3c2cf-e05f-43ac-833a-1ae097417c9b-kube-api-access-47vp7\") pod \"cinder-operator-controller-manager-7654479b5b-qdnwx\" (UID: \"50a3c2cf-e05f-43ac-833a-1ae097417c9b\") " pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.576702 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zd77\" (UniqueName: \"kubernetes.io/projected/9f060f22-b0a6-41e0-b88e-3c4411b06f1d-kube-api-access-8zd77\") pod \"horizon-operator-controller-manager-54876c876f-c2wf7\" (UID: \"9f060f22-b0a6-41e0-b88e-3c4411b06f1d\") " pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.576739 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q52v6\" (UniqueName: \"kubernetes.io/projected/cc8e0f09-d68e-4cab-af19-86180b78cb70-kube-api-access-q52v6\") pod \"glance-operator-controller-manager-5568b5d68-g9stf\" (UID: \"cc8e0f09-d68e-4cab-af19-86180b78cb70\") " 
pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.576761 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnrx\" (UniqueName: \"kubernetes.io/projected/8975b7de-977f-45a6-b619-1bae2838c9eb-kube-api-access-7hnrx\") pod \"infra-operator-controller-manager-658588b8c9-gj2l2\" (UID: \"8975b7de-977f-45a6-b619-1bae2838c9eb\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.577507 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.578683 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.585047 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lfgfg" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.605282 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.610874 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.611961 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.615171 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qn2gm" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.620151 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.621131 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.625031 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-f99h9" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.636496 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.682313 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbkpm\" (UniqueName: \"kubernetes.io/projected/fe92c819-2989-4bb4-8051-6d457ac6b121-kube-api-access-cbkpm\") pod \"designate-operator-controller-manager-75dfd9b554-tbhkj\" (UID: \"fe92c819-2989-4bb4-8051-6d457ac6b121\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.682406 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47vp7\" (UniqueName: \"kubernetes.io/projected/50a3c2cf-e05f-43ac-833a-1ae097417c9b-kube-api-access-47vp7\") pod \"cinder-operator-controller-manager-7654479b5b-qdnwx\" (UID: \"50a3c2cf-e05f-43ac-833a-1ae097417c9b\") " 
pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.682440 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zd77\" (UniqueName: \"kubernetes.io/projected/9f060f22-b0a6-41e0-b88e-3c4411b06f1d-kube-api-access-8zd77\") pod \"horizon-operator-controller-manager-54876c876f-c2wf7\" (UID: \"9f060f22-b0a6-41e0-b88e-3c4411b06f1d\") " pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.682471 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnrx\" (UniqueName: \"kubernetes.io/projected/8975b7de-977f-45a6-b619-1bae2838c9eb-kube-api-access-7hnrx\") pod \"infra-operator-controller-manager-658588b8c9-gj2l2\" (UID: \"8975b7de-977f-45a6-b619-1bae2838c9eb\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.682510 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q52v6\" (UniqueName: \"kubernetes.io/projected/cc8e0f09-d68e-4cab-af19-86180b78cb70-kube-api-access-q52v6\") pod \"glance-operator-controller-manager-5568b5d68-g9stf\" (UID: \"cc8e0f09-d68e-4cab-af19-86180b78cb70\") " pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.683823 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwngk\" (UniqueName: \"kubernetes.io/projected/e8f690af-8476-4fae-821f-cc822c9a1273-kube-api-access-lwngk\") pod \"barbican-operator-controller-manager-5f7c849b98-qwpjf\" (UID: \"e8f690af-8476-4fae-821f-cc822c9a1273\") " pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.683921 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert\") pod \"infra-operator-controller-manager-658588b8c9-gj2l2\" (UID: \"8975b7de-977f-45a6-b619-1bae2838c9eb\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.683997 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjd44\" (UniqueName: \"kubernetes.io/projected/d877d19f-e34d-429e-8bf7-e7f9c6c141d1-kube-api-access-cjd44\") pod \"heat-operator-controller-manager-8f58bc9db-bczqd\" (UID: \"d877d19f-e34d-429e-8bf7-e7f9c6c141d1\") " pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" Oct 04 05:03:25 crc kubenswrapper[4802]: E1004 05:03:25.685230 4802 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 04 05:03:25 crc kubenswrapper[4802]: E1004 05:03:25.685325 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert podName:8975b7de-977f-45a6-b619-1bae2838c9eb nodeName:}" failed. No retries permitted until 2025-10-04 05:03:26.185301273 +0000 UTC m=+1048.593301898 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert") pod "infra-operator-controller-manager-658588b8c9-gj2l2" (UID: "8975b7de-977f-45a6-b619-1bae2838c9eb") : secret "infra-operator-webhook-server-cert" not found Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.691036 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.713220 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.713948 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q52v6\" (UniqueName: \"kubernetes.io/projected/cc8e0f09-d68e-4cab-af19-86180b78cb70-kube-api-access-q52v6\") pod \"glance-operator-controller-manager-5568b5d68-g9stf\" (UID: \"cc8e0f09-d68e-4cab-af19-86180b78cb70\") " pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.714238 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbkpm\" (UniqueName: \"kubernetes.io/projected/fe92c819-2989-4bb4-8051-6d457ac6b121-kube-api-access-cbkpm\") pod \"designate-operator-controller-manager-75dfd9b554-tbhkj\" (UID: \"fe92c819-2989-4bb4-8051-6d457ac6b121\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.714721 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zd77\" (UniqueName: \"kubernetes.io/projected/9f060f22-b0a6-41e0-b88e-3c4411b06f1d-kube-api-access-8zd77\") pod \"horizon-operator-controller-manager-54876c876f-c2wf7\" (UID: \"9f060f22-b0a6-41e0-b88e-3c4411b06f1d\") " 
pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.722393 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjd44\" (UniqueName: \"kubernetes.io/projected/d877d19f-e34d-429e-8bf7-e7f9c6c141d1-kube-api-access-cjd44\") pod \"heat-operator-controller-manager-8f58bc9db-bczqd\" (UID: \"d877d19f-e34d-429e-8bf7-e7f9c6c141d1\") " pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.733565 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.733762 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.735299 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.737810 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwngk\" (UniqueName: \"kubernetes.io/projected/e8f690af-8476-4fae-821f-cc822c9a1273-kube-api-access-lwngk\") pod \"barbican-operator-controller-manager-5f7c849b98-qwpjf\" (UID: \"e8f690af-8476-4fae-821f-cc822c9a1273\") " pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.745441 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnrx\" (UniqueName: \"kubernetes.io/projected/8975b7de-977f-45a6-b619-1bae2838c9eb-kube-api-access-7hnrx\") pod \"infra-operator-controller-manager-658588b8c9-gj2l2\" (UID: \"8975b7de-977f-45a6-b619-1bae2838c9eb\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.745894 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47vp7\" (UniqueName: \"kubernetes.io/projected/50a3c2cf-e05f-43ac-833a-1ae097417c9b-kube-api-access-47vp7\") pod \"cinder-operator-controller-manager-7654479b5b-qdnwx\" (UID: \"50a3c2cf-e05f-43ac-833a-1ae097417c9b\") " pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.746317 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qmks7" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.761746 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.763101 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.772033 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.774969 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9tqzh" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.793010 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7k9\" (UniqueName: \"kubernetes.io/projected/c7242ee7-cb17-471e-8639-fd36ccd2d398-kube-api-access-5q7k9\") pod \"manila-operator-controller-manager-65d89cfd9f-6pkx2\" (UID: \"c7242ee7-cb17-471e-8639-fd36ccd2d398\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.793061 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnpkx\" (UniqueName: \"kubernetes.io/projected/f22b8bf6-a48f-41e3-88f4-601a8befda4b-kube-api-access-jnpkx\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-xqp95\" (UID: \"f22b8bf6-a48f-41e3-88f4-601a8befda4b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.793109 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlpb6\" (UniqueName: \"kubernetes.io/projected/d1ae58bd-7435-4bb1-819c-2f085a231ce0-kube-api-access-hlpb6\") pod \"keystone-operator-controller-manager-655d88ccb9-6zzzc\" (UID: \"d1ae58bd-7435-4bb1-819c-2f085a231ce0\") " pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" Oct 04 05:03:25 crc 
kubenswrapper[4802]: I1004 05:03:25.793129 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqdxk\" (UniqueName: \"kubernetes.io/projected/33f312f2-394b-4ce5-965d-69a464079f55-kube-api-access-lqdxk\") pod \"neutron-operator-controller-manager-8d984cc4d-dhhv6\" (UID: \"33f312f2-394b-4ce5-965d-69a464079f55\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.793150 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dxqc\" (UniqueName: \"kubernetes.io/projected/e59f4cc7-3f2d-43a7-91f6-cf589392f5fa-kube-api-access-5dxqc\") pod \"ironic-operator-controller-manager-699b87f775-sgw2w\" (UID: \"e59f4cc7-3f2d-43a7-91f6-cf589392f5fa\") " pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.804407 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.852021 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.852573 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.878451 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.881320 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.887923 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-s8glf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.894447 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q7k9\" (UniqueName: \"kubernetes.io/projected/c7242ee7-cb17-471e-8639-fd36ccd2d398-kube-api-access-5q7k9\") pod \"manila-operator-controller-manager-65d89cfd9f-6pkx2\" (UID: \"c7242ee7-cb17-471e-8639-fd36ccd2d398\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.894534 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbzbl\" (UniqueName: \"kubernetes.io/projected/16c65360-b9da-46ee-807a-7a508bb5b97b-kube-api-access-qbzbl\") pod \"nova-operator-controller-manager-7c7fc454ff-d7g7m\" (UID: \"16c65360-b9da-46ee-807a-7a508bb5b97b\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.894580 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnpkx\" (UniqueName: \"kubernetes.io/projected/f22b8bf6-a48f-41e3-88f4-601a8befda4b-kube-api-access-jnpkx\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-xqp95\" (UID: \"f22b8bf6-a48f-41e3-88f4-601a8befda4b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.894665 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlpb6\" (UniqueName: \"kubernetes.io/projected/d1ae58bd-7435-4bb1-819c-2f085a231ce0-kube-api-access-hlpb6\") pod 
\"keystone-operator-controller-manager-655d88ccb9-6zzzc\" (UID: \"d1ae58bd-7435-4bb1-819c-2f085a231ce0\") " pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.894689 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqdxk\" (UniqueName: \"kubernetes.io/projected/33f312f2-394b-4ce5-965d-69a464079f55-kube-api-access-lqdxk\") pod \"neutron-operator-controller-manager-8d984cc4d-dhhv6\" (UID: \"33f312f2-394b-4ce5-965d-69a464079f55\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.894716 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dxqc\" (UniqueName: \"kubernetes.io/projected/e59f4cc7-3f2d-43a7-91f6-cf589392f5fa-kube-api-access-5dxqc\") pod \"ironic-operator-controller-manager-699b87f775-sgw2w\" (UID: \"e59f4cc7-3f2d-43a7-91f6-cf589392f5fa\") " pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.917279 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.922908 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.924178 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.927944 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fqs4n" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.952832 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnpkx\" (UniqueName: \"kubernetes.io/projected/f22b8bf6-a48f-41e3-88f4-601a8befda4b-kube-api-access-jnpkx\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-xqp95\" (UID: \"f22b8bf6-a48f-41e3-88f4-601a8befda4b\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.955000 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.961392 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dxqc\" (UniqueName: \"kubernetes.io/projected/e59f4cc7-3f2d-43a7-91f6-cf589392f5fa-kube-api-access-5dxqc\") pod \"ironic-operator-controller-manager-699b87f775-sgw2w\" (UID: \"e59f4cc7-3f2d-43a7-91f6-cf589392f5fa\") " pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.968420 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf"] Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.970165 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.970184 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqdxk\" (UniqueName: \"kubernetes.io/projected/33f312f2-394b-4ce5-965d-69a464079f55-kube-api-access-lqdxk\") pod \"neutron-operator-controller-manager-8d984cc4d-dhhv6\" (UID: \"33f312f2-394b-4ce5-965d-69a464079f55\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.987269 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q7k9\" (UniqueName: \"kubernetes.io/projected/c7242ee7-cb17-471e-8639-fd36ccd2d398-kube-api-access-5q7k9\") pod \"manila-operator-controller-manager-65d89cfd9f-6pkx2\" (UID: \"c7242ee7-cb17-471e-8639-fd36ccd2d398\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.993224 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-nh6dk" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.993451 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.995723 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dghg\" (UniqueName: \"kubernetes.io/projected/31ac0023-c7da-49a3-8276-063e6c7b8a38-kube-api-access-5dghg\") pod \"octavia-operator-controller-manager-7468f855d8-xc8rb\" (UID: \"31ac0023-c7da-49a3-8276-063e6c7b8a38\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.995759 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7c82134-0489-4a84-91b2-de5e9ff651a3-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf\" (UID: \"e7c82134-0489-4a84-91b2-de5e9ff651a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.995790 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbzbl\" (UniqueName: \"kubernetes.io/projected/16c65360-b9da-46ee-807a-7a508bb5b97b-kube-api-access-qbzbl\") pod \"nova-operator-controller-manager-7c7fc454ff-d7g7m\" (UID: \"16c65360-b9da-46ee-807a-7a508bb5b97b\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" Oct 04 05:03:25 crc kubenswrapper[4802]: I1004 05:03:25.995852 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhl82\" (UniqueName: \"kubernetes.io/projected/e7c82134-0489-4a84-91b2-de5e9ff651a3-kube-api-access-zhl82\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf\" (UID: \"e7c82134-0489-4a84-91b2-de5e9ff651a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.006918 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.020818 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.033078 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlpb6\" (UniqueName: \"kubernetes.io/projected/d1ae58bd-7435-4bb1-819c-2f085a231ce0-kube-api-access-hlpb6\") pod \"keystone-operator-controller-manager-655d88ccb9-6zzzc\" (UID: \"d1ae58bd-7435-4bb1-819c-2f085a231ce0\") " pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.033168 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf"] Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.034424 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.061085 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf"] Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.072083 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-n62vk" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.073386 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbzbl\" (UniqueName: \"kubernetes.io/projected/16c65360-b9da-46ee-807a-7a508bb5b97b-kube-api-access-qbzbl\") pod \"nova-operator-controller-manager-7c7fc454ff-d7g7m\" (UID: \"16c65360-b9da-46ee-807a-7a508bb5b97b\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.075461 4802 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9"] Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.076558 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.096867 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vpl\" (UniqueName: \"kubernetes.io/projected/26a2d08a-3357-48e4-8fda-e2fbe339e7a8-kube-api-access-r9vpl\") pod \"ovn-operator-controller-manager-579449c7d5-khgqf\" (UID: \"26a2d08a-3357-48e4-8fda-e2fbe339e7a8\") " pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.096929 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhl82\" (UniqueName: \"kubernetes.io/projected/e7c82134-0489-4a84-91b2-de5e9ff651a3-kube-api-access-zhl82\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf\" (UID: \"e7c82134-0489-4a84-91b2-de5e9ff651a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.096995 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnbb\" (UniqueName: \"kubernetes.io/projected/da0eb8e5-72e3-4d6c-b896-e337363ed73c-kube-api-access-nsnbb\") pod \"placement-operator-controller-manager-54689d9f88-dhxm9\" (UID: \"da0eb8e5-72e3-4d6c-b896-e337363ed73c\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.097051 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dghg\" (UniqueName: 
\"kubernetes.io/projected/31ac0023-c7da-49a3-8276-063e6c7b8a38-kube-api-access-5dghg\") pod \"octavia-operator-controller-manager-7468f855d8-xc8rb\" (UID: \"31ac0023-c7da-49a3-8276-063e6c7b8a38\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.097116 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7c82134-0489-4a84-91b2-de5e9ff651a3-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf\" (UID: \"e7c82134-0489-4a84-91b2-de5e9ff651a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.098471 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-d2wkp" Oct 04 05:03:26 crc kubenswrapper[4802]: E1004 05:03:26.099465 4802 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 04 05:03:26 crc kubenswrapper[4802]: E1004 05:03:26.099534 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7c82134-0489-4a84-91b2-de5e9ff651a3-cert podName:e7c82134-0489-4a84-91b2-de5e9ff651a3 nodeName:}" failed. No retries permitted until 2025-10-04 05:03:26.599516069 +0000 UTC m=+1049.007516694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e7c82134-0489-4a84-91b2-de5e9ff651a3-cert") pod "openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" (UID: "e7c82134-0489-4a84-91b2-de5e9ff651a3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.134880 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf"] Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.153708 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-m758l"] Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.154971 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.163375 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2tq9n" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.164220 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dghg\" (UniqueName: \"kubernetes.io/projected/31ac0023-c7da-49a3-8276-063e6c7b8a38-kube-api-access-5dghg\") pod \"octavia-operator-controller-manager-7468f855d8-xc8rb\" (UID: \"31ac0023-c7da-49a3-8276-063e6c7b8a38\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.169254 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhl82\" (UniqueName: \"kubernetes.io/projected/e7c82134-0489-4a84-91b2-de5e9ff651a3-kube-api-access-zhl82\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf\" (UID: \"e7c82134-0489-4a84-91b2-de5e9ff651a3\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.193500 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.199870 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.200563 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9vpl\" (UniqueName: \"kubernetes.io/projected/26a2d08a-3357-48e4-8fda-e2fbe339e7a8-kube-api-access-r9vpl\") pod \"ovn-operator-controller-manager-579449c7d5-khgqf\" (UID: \"26a2d08a-3357-48e4-8fda-e2fbe339e7a8\") " pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.215254 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert\") pod \"infra-operator-controller-manager-658588b8c9-gj2l2\" (UID: \"8975b7de-977f-45a6-b619-1bae2838c9eb\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.215295 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsnbb\" (UniqueName: \"kubernetes.io/projected/da0eb8e5-72e3-4d6c-b896-e337363ed73c-kube-api-access-nsnbb\") pod \"placement-operator-controller-manager-54689d9f88-dhxm9\" (UID: \"da0eb8e5-72e3-4d6c-b896-e337363ed73c\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.215389 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tkbh\" (UniqueName: \"kubernetes.io/projected/dd3b5879-0e5a-41f9-ab00-16fc063260cc-kube-api-access-2tkbh\") pod \"swift-operator-controller-manager-6859f9b676-m758l\" (UID: \"dd3b5879-0e5a-41f9-ab00-16fc063260cc\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.209196 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w"
Oct 04 05:03:26 crc kubenswrapper[4802]: E1004 05:03:26.216165 4802 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Oct 04 05:03:26 crc kubenswrapper[4802]: E1004 05:03:26.216217 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert podName:8975b7de-977f-45a6-b619-1bae2838c9eb nodeName:}" failed. No retries permitted until 2025-10-04 05:03:27.216191203 +0000 UTC m=+1049.624191828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert") pod "infra-operator-controller-manager-658588b8c9-gj2l2" (UID: "8975b7de-977f-45a6-b619-1bae2838c9eb") : secret "infra-operator-webhook-server-cert" not found
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.219966 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.221194 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.236288 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vwz9h"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.244080 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.244131 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.248429 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsnbb\" (UniqueName: \"kubernetes.io/projected/da0eb8e5-72e3-4d6c-b896-e337363ed73c-kube-api-access-nsnbb\") pod \"placement-operator-controller-manager-54689d9f88-dhxm9\" (UID: \"da0eb8e5-72e3-4d6c-b896-e337363ed73c\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.253285 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.267704 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9vpl\" (UniqueName: \"kubernetes.io/projected/26a2d08a-3357-48e4-8fda-e2fbe339e7a8-kube-api-access-r9vpl\") pod \"ovn-operator-controller-manager-579449c7d5-khgqf\" (UID: \"26a2d08a-3357-48e4-8fda-e2fbe339e7a8\") " pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.291816 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-m758l"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.295016 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.317798 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tkbh\" (UniqueName: \"kubernetes.io/projected/dd3b5879-0e5a-41f9-ab00-16fc063260cc-kube-api-access-2tkbh\") pod \"swift-operator-controller-manager-6859f9b676-m758l\" (UID: \"dd3b5879-0e5a-41f9-ab00-16fc063260cc\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.317878 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmc5\" (UniqueName: \"kubernetes.io/projected/2a7bf534-0e93-4374-9eca-3015e9739b8b-kube-api-access-vjmc5\") pod \"telemetry-operator-controller-manager-5d4d74dd89-mfm5d\" (UID: \"2a7bf534-0e93-4374-9eca-3015e9739b8b\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.322351 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.325953 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.354690 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tkbh\" (UniqueName: \"kubernetes.io/projected/dd3b5879-0e5a-41f9-ab00-16fc063260cc-kube-api-access-2tkbh\") pod \"swift-operator-controller-manager-6859f9b676-m758l\" (UID: \"dd3b5879-0e5a-41f9-ab00-16fc063260cc\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.378718 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.384008 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-k5gvb"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.423624 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b72kz\" (UniqueName: \"kubernetes.io/projected/d90631a7-c7d2-4e82-a841-21980a76d784-kube-api-access-b72kz\") pod \"test-operator-controller-manager-5cd5cb47d7-pfhnt\" (UID: \"d90631a7-c7d2-4e82-a841-21980a76d784\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.423716 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmc5\" (UniqueName: \"kubernetes.io/projected/2a7bf534-0e93-4374-9eca-3015e9739b8b-kube-api-access-vjmc5\") pod \"telemetry-operator-controller-manager-5d4d74dd89-mfm5d\" (UID: \"2a7bf534-0e93-4374-9eca-3015e9739b8b\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.442662 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.511504 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmc5\" (UniqueName: \"kubernetes.io/projected/2a7bf534-0e93-4374-9eca-3015e9739b8b-kube-api-access-vjmc5\") pod \"telemetry-operator-controller-manager-5d4d74dd89-mfm5d\" (UID: \"2a7bf534-0e93-4374-9eca-3015e9739b8b\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.515388 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.520252 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.524911 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b72kz\" (UniqueName: \"kubernetes.io/projected/d90631a7-c7d2-4e82-a841-21980a76d784-kube-api-access-b72kz\") pod \"test-operator-controller-manager-5cd5cb47d7-pfhnt\" (UID: \"d90631a7-c7d2-4e82-a841-21980a76d784\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.536511 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.536575 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.536674 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.540441 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.540565 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.558390 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nscfv"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.586240 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b72kz\" (UniqueName: \"kubernetes.io/projected/d90631a7-c7d2-4e82-a841-21980a76d784-kube-api-access-b72kz\") pod \"test-operator-controller-manager-5cd5cb47d7-pfhnt\" (UID: \"d90631a7-c7d2-4e82-a841-21980a76d784\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.613750 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.615435 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.618226 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-znj9w"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.619738 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.622021 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.622717 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.632150 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.632879 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpmk5\" (UniqueName: \"kubernetes.io/projected/566c2a23-aea4-4ea6-9820-666a22d36d99-kube-api-access-zpmk5\") pod \"watcher-operator-controller-manager-6cbc6dd547-cjxd2\" (UID: \"566c2a23-aea4-4ea6-9820-666a22d36d99\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.633053 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7c82134-0489-4a84-91b2-de5e9ff651a3-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf\" (UID: \"e7c82134-0489-4a84-91b2-de5e9ff651a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.633475 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.640567 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hzzv6"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.651952 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.661498 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7c82134-0489-4a84-91b2-de5e9ff651a3-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf\" (UID: \"e7c82134-0489-4a84-91b2-de5e9ff651a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.672247 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.746385 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert\") pod \"openstack-operator-controller-manager-6f965f7c8f-wrfgq\" (UID: \"66a174f5-be2c-4fb0-92f0-4cb911033d87\") " pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.746523 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpmk5\" (UniqueName: \"kubernetes.io/projected/566c2a23-aea4-4ea6-9820-666a22d36d99-kube-api-access-zpmk5\") pod \"watcher-operator-controller-manager-6cbc6dd547-cjxd2\" (UID: \"566c2a23-aea4-4ea6-9820-666a22d36d99\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.754117 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh9n6\" (UniqueName: \"kubernetes.io/projected/66a174f5-be2c-4fb0-92f0-4cb911033d87-kube-api-access-xh9n6\") pod \"openstack-operator-controller-manager-6f965f7c8f-wrfgq\" (UID: \"66a174f5-be2c-4fb0-92f0-4cb911033d87\") " pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.754195 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6x69\" (UniqueName: \"kubernetes.io/projected/f72da0d9-79ad-4717-9b92-f45533584fb7-kube-api-access-p6x69\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xbjts\" (UID: \"f72da0d9-79ad-4717-9b92-f45533584fb7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.769841 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpmk5\" (UniqueName: \"kubernetes.io/projected/566c2a23-aea4-4ea6-9820-666a22d36d99-kube-api-access-zpmk5\") pod \"watcher-operator-controller-manager-6cbc6dd547-cjxd2\" (UID: \"566c2a23-aea4-4ea6-9820-666a22d36d99\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.809305 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj"]
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.864705 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh9n6\" (UniqueName: \"kubernetes.io/projected/66a174f5-be2c-4fb0-92f0-4cb911033d87-kube-api-access-xh9n6\") pod \"openstack-operator-controller-manager-6f965f7c8f-wrfgq\" (UID: \"66a174f5-be2c-4fb0-92f0-4cb911033d87\") " pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.864754 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6x69\" (UniqueName: \"kubernetes.io/projected/f72da0d9-79ad-4717-9b92-f45533584fb7-kube-api-access-p6x69\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xbjts\" (UID: \"f72da0d9-79ad-4717-9b92-f45533584fb7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.864863 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert\") pod \"openstack-operator-controller-manager-6f965f7c8f-wrfgq\" (UID: \"66a174f5-be2c-4fb0-92f0-4cb911033d87\") " pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"
Oct 04 05:03:26 crc kubenswrapper[4802]: E1004 05:03:26.865041 4802 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Oct 04 05:03:26 crc kubenswrapper[4802]: E1004 05:03:26.865100 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert podName:66a174f5-be2c-4fb0-92f0-4cb911033d87 nodeName:}" failed. No retries permitted until 2025-10-04 05:03:27.365081974 +0000 UTC m=+1049.773082599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert") pod "openstack-operator-controller-manager-6f965f7c8f-wrfgq" (UID: "66a174f5-be2c-4fb0-92f0-4cb911033d87") : secret "webhook-server-cert" not found
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.883197 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.895268 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6x69\" (UniqueName: \"kubernetes.io/projected/f72da0d9-79ad-4717-9b92-f45533584fb7-kube-api-access-p6x69\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-xbjts\" (UID: \"f72da0d9-79ad-4717-9b92-f45533584fb7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.898581 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh9n6\" (UniqueName: \"kubernetes.io/projected/66a174f5-be2c-4fb0-92f0-4cb911033d87-kube-api-access-xh9n6\") pod \"openstack-operator-controller-manager-6f965f7c8f-wrfgq\" (UID: \"66a174f5-be2c-4fb0-92f0-4cb911033d87\") " pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"
Oct 04 05:03:26 crc kubenswrapper[4802]: I1004 05:03:26.912047 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2"
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.030074 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts"
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.049857 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.059115 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.180937 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.275747 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert\") pod \"infra-operator-controller-manager-658588b8c9-gj2l2\" (UID: \"8975b7de-977f-45a6-b619-1bae2838c9eb\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2"
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.280788 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8975b7de-977f-45a6-b619-1bae2838c9eb-cert\") pod \"infra-operator-controller-manager-658588b8c9-gj2l2\" (UID: \"8975b7de-977f-45a6-b619-1bae2838c9eb\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2"
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.331190 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2"
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.378128 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert\") pod \"openstack-operator-controller-manager-6f965f7c8f-wrfgq\" (UID: \"66a174f5-be2c-4fb0-92f0-4cb911033d87\") " pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"
Oct 04 05:03:27 crc kubenswrapper[4802]: E1004 05:03:27.378454 4802 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Oct 04 05:03:27 crc kubenswrapper[4802]: E1004 05:03:27.378544 4802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert podName:66a174f5-be2c-4fb0-92f0-4cb911033d87 nodeName:}" failed. No retries permitted until 2025-10-04 05:03:28.378501252 +0000 UTC m=+1050.786501877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert") pod "openstack-operator-controller-manager-6f965f7c8f-wrfgq" (UID: "66a174f5-be2c-4fb0-92f0-4cb911033d87") : secret "webhook-server-cert" not found
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.487784 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc"]
Oct 04 05:03:27 crc kubenswrapper[4802]: W1004 05:03:27.499327 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1ae58bd_7435_4bb1_819c_2f085a231ce0.slice/crio-b8b450e1290b2cf7f3ffd1d633d838b9025aa098f9a6d8362d3cf57712a584c5 WatchSource:0}: Error finding container b8b450e1290b2cf7f3ffd1d633d838b9025aa098f9a6d8362d3cf57712a584c5: Status 404 returned error can't find the container with id b8b450e1290b2cf7f3ffd1d633d838b9025aa098f9a6d8362d3cf57712a584c5
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.505225 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb"]
Oct 04 05:03:27 crc kubenswrapper[4802]: W1004 05:03:27.507901 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31ac0023_c7da_49a3_8276_063e6c7b8a38.slice/crio-b3e489f800b387d1297ae32b248db8b86fbd3a1692bc1d19adfffee0c690da5d WatchSource:0}: Error finding container b3e489f800b387d1297ae32b248db8b86fbd3a1692bc1d19adfffee0c690da5d: Status 404 returned error can't find the container with id b3e489f800b387d1297ae32b248db8b86fbd3a1692bc1d19adfffee0c690da5d
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.527209 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.534184 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.539676 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.546666 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.554665 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6"]
Oct 04 05:03:27 crc kubenswrapper[4802]: W1004 05:03:27.555789 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f312f2_394b_4ce5_965d_69a464079f55.slice/crio-5df7b032c6a99e3de23199d0463719adf73934e0ae7cc64b6622badb2c80e8b0 WatchSource:0}: Error finding container 5df7b032c6a99e3de23199d0463719adf73934e0ae7cc64b6622badb2c80e8b0: Status 404 returned error can't find the container with id 5df7b032c6a99e3de23199d0463719adf73934e0ae7cc64b6622badb2c80e8b0
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.628206 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.645750 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2"]
Oct 04 05:03:27 crc kubenswrapper[4802]: W1004 05:03:27.663306 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7242ee7_cb17_471e_8639_fd36ccd2d398.slice/crio-dbe5f70711210abac483f0f465787fe69f7ae10389a225c5ff13ba46afbe4e73 WatchSource:0}: Error finding container dbe5f70711210abac483f0f465787fe69f7ae10389a225c5ff13ba46afbe4e73: Status 404 returned error can't find the container with id dbe5f70711210abac483f0f465787fe69f7ae10389a225c5ff13ba46afbe4e73
Oct 04 05:03:27 crc kubenswrapper[4802]: W1004 05:03:27.663985 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a2d08a_3357_48e4_8fda_e2fbe339e7a8.slice/crio-66bc8d7c36201d3aa0bc904fa586ac97e68219753e801131cf9f8ba38648213c WatchSource:0}: Error finding container 66bc8d7c36201d3aa0bc904fa586ac97e68219753e801131cf9f8ba38648213c: Status 404 returned error can't find the container with id 66bc8d7c36201d3aa0bc904fa586ac97e68219753e801131cf9f8ba38648213c
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.888034 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" event={"ID":"d1ae58bd-7435-4bb1-819c-2f085a231ce0","Type":"ContainerStarted","Data":"b8b450e1290b2cf7f3ffd1d633d838b9025aa098f9a6d8362d3cf57712a584c5"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.904033 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" event={"ID":"9f060f22-b0a6-41e0-b88e-3c4411b06f1d","Type":"ContainerStarted","Data":"82b78abe9d75f73c7ec42050038157d660611e79cd039e0f3dc4f64c56023a08"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.909027 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" event={"ID":"e8f690af-8476-4fae-821f-cc822c9a1273","Type":"ContainerStarted","Data":"8e62891708d8e44a5ff645e8642fab731128602e82c3ca45e04d06d8835eeb37"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.918624 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" event={"ID":"33f312f2-394b-4ce5-965d-69a464079f55","Type":"ContainerStarted","Data":"5df7b032c6a99e3de23199d0463719adf73934e0ae7cc64b6622badb2c80e8b0"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.920253 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" event={"ID":"cc8e0f09-d68e-4cab-af19-86180b78cb70","Type":"ContainerStarted","Data":"98ce8c33a94758f775a090dddddcf37c14844f50fa7ce88e9ecc1cd8bd0ccfd2"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.926241 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-m758l"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.929331 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" event={"ID":"e59f4cc7-3f2d-43a7-91f6-cf589392f5fa","Type":"ContainerStarted","Data":"980599f8ea5c75a9f89cbdf80768a0dde6c03ce8e5c61a90bc2bb2b322eae02e"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.932690 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" event={"ID":"fe92c819-2989-4bb4-8051-6d457ac6b121","Type":"ContainerStarted","Data":"94c43ec6e338af49c273f978ba9985f36dfd8c799b1899e5ea2ed4aed508db2a"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.934671 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" event={"ID":"d877d19f-e34d-429e-8bf7-e7f9c6c141d1","Type":"ContainerStarted","Data":"41161edc6f0e2c51ab4dfebdba985580a9d9431b372f4ad1e6de4067d20ebc86"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.940593 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" event={"ID":"f22b8bf6-a48f-41e3-88f4-601a8befda4b","Type":"ContainerStarted","Data":"a3ff21253edf5b436777172593dee07e09b95008bfee450bb8617888610c4534"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.948465 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.951236 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" event={"ID":"c7242ee7-cb17-471e-8639-fd36ccd2d398","Type":"ContainerStarted","Data":"dbe5f70711210abac483f0f465787fe69f7ae10389a225c5ff13ba46afbe4e73"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.956314 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" event={"ID":"50a3c2cf-e05f-43ac-833a-1ae097417c9b","Type":"ContainerStarted","Data":"94bba9773b14a390ab5715a3082c7d7fdb5b11226962297eda41a681ff4df4ef"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.962875 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf" event={"ID":"26a2d08a-3357-48e4-8fda-e2fbe339e7a8","Type":"ContainerStarted","Data":"66bc8d7c36201d3aa0bc904fa586ac97e68219753e801131cf9f8ba38648213c"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.974465 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9"]
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.977821 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" event={"ID":"31ac0023-c7da-49a3-8276-063e6c7b8a38","Type":"ContainerStarted","Data":"b3e489f800b387d1297ae32b248db8b86fbd3a1692bc1d19adfffee0c690da5d"}
Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.982042 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d"]
Oct 04 05:03:27 crc kubenswrapper[4802]: E1004 05:03:27.988051 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qbzbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7c7fc454ff-d7g7m_openstack-operators(16c65360-b9da-46ee-807a-7a508bb5b97b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Oct 04 05:03:27 crc kubenswrapper[4802]: W1004 05:03:27.989427 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8975b7de_977f_45a6_b619_1bae2838c9eb.slice/crio-cdfecf361fa72ca966161430245ab42cb751d292576e4d4f566db3558a97d8ad WatchSource:0}: Error finding container cdfecf361fa72ca966161430245ab42cb751d292576e4d4f566db3558a97d8ad: Status 404 returned error can't find the container with id cdfecf361fa72ca966161430245ab42cb751d292576e4d4f566db3558a97d8ad
Oct 04 05:03:27 crc kubenswrapper[4802]: W1004 05:03:27.994030 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a7bf534_0e93_4374_9eca_3015e9739b8b.slice/crio-859c112a447d72e15b134cd3059217ac1451d6657f99a4a2f35caf4d34e6503e WatchSource:0}: Error finding container 859c112a447d72e15b134cd3059217ac1451d6657f99a4a2f35caf4d34e6503e: Status 404 returned error can't find the container with id 859c112a447d72e15b134cd3059217ac1451d6657f99a4a2f35caf4d34e6503e
Oct 04 05:03:27 crc kubenswrapper[4802]: E1004 05:03:27.994327 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hnrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-658588b8c9-gj2l2_openstack-operators(8975b7de-977f-45a6-b619-1bae2838c9eb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 04 05:03:27 crc kubenswrapper[4802]: I1004 05:03:27.999303 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2"] Oct 04 05:03:28 crc kubenswrapper[4802]: W1004 05:03:28.002672 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566c2a23_aea4_4ea6_9820_666a22d36d99.slice/crio-b08584539ee969f6a928cecfbb7eefe7c5b480a5bd975b6f961eb042945c4fb4 WatchSource:0}: Error finding container b08584539ee969f6a928cecfbb7eefe7c5b480a5bd975b6f961eb042945c4fb4: Status 404 returned error can't find the container with id b08584539ee969f6a928cecfbb7eefe7c5b480a5bd975b6f961eb042945c4fb4 Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.003933 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjmc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5d4d74dd89-mfm5d_openstack-operators(2a7bf534-0e93-4374-9eca-3015e9739b8b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 04 05:03:28 crc kubenswrapper[4802]: I1004 05:03:28.008261 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2"] Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.009548 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpmk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6cbc6dd547-cjxd2_openstack-operators(566c2a23-aea4-4ea6-9820-666a22d36d99): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.010499 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6x69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-xbjts_openstack-operators(f72da0d9-79ad-4717-9b92-f45533584fb7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 04 
05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.014336 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts" podUID="f72da0d9-79ad-4717-9b92-f45533584fb7" Oct 04 05:03:28 crc kubenswrapper[4802]: W1004 05:03:28.018084 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd90631a7_c7d2_4e82_a841_21980a76d784.slice/crio-63d20d5044c235c48fc0970ed7f5ed60c3f233e24b6f4b2392dada9f3d7a69e1 WatchSource:0}: Error finding container 63d20d5044c235c48fc0970ed7f5ed60c3f233e24b6f4b2392dada9f3d7a69e1: Status 404 returned error can't find the container with id 63d20d5044c235c48fc0970ed7f5ed60c3f233e24b6f4b2392dada9f3d7a69e1 Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.019017 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_D
EFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_I
MAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,}
,EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMA
GE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podifi
ed-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DB
CLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resour
ces:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhl82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf_openstack-operators(e7c82134-0489-4a84-91b2-de5e9ff651a3): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Oct 04 05:03:28 crc kubenswrapper[4802]: I1004 05:03:28.021231 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt"] Oct 04 05:03:28 crc kubenswrapper[4802]: I1004 05:03:28.028903 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts"] Oct 04 05:03:28 crc kubenswrapper[4802]: I1004 05:03:28.040604 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf"] Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.062347 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b72kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd5cb47d7-pfhnt_openstack-operators(d90631a7-c7d2-4e82-a841-21980a76d784): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.343932 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" podUID="8975b7de-977f-45a6-b619-1bae2838c9eb" Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.373547 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" podUID="e7c82134-0489-4a84-91b2-de5e9ff651a3" Oct 04 05:03:28 crc 
kubenswrapper[4802]: E1004 05:03:28.393360 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" podUID="2a7bf534-0e93-4374-9eca-3015e9739b8b" Oct 04 05:03:28 crc kubenswrapper[4802]: I1004 05:03:28.403127 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert\") pod \"openstack-operator-controller-manager-6f965f7c8f-wrfgq\" (UID: \"66a174f5-be2c-4fb0-92f0-4cb911033d87\") " pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" Oct 04 05:03:28 crc kubenswrapper[4802]: I1004 05:03:28.412008 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66a174f5-be2c-4fb0-92f0-4cb911033d87-cert\") pod \"openstack-operator-controller-manager-6f965f7c8f-wrfgq\" (UID: \"66a174f5-be2c-4fb0-92f0-4cb911033d87\") " pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.415272 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" podUID="16c65360-b9da-46ee-807a-7a508bb5b97b" Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.433393 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" podUID="d90631a7-c7d2-4e82-a841-21980a76d784" Oct 04 05:03:28 crc kubenswrapper[4802]: E1004 05:03:28.446041 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" podUID="566c2a23-aea4-4ea6-9820-666a22d36d99" Oct 04 05:03:28 crc kubenswrapper[4802]: I1004 05:03:28.472823 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.087894 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" event={"ID":"e7c82134-0489-4a84-91b2-de5e9ff651a3","Type":"ContainerStarted","Data":"f918187893b3c0731dccd1a04a7d032a896a113a77a8ea7dfc4b2c82de45b376"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.088290 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" event={"ID":"e7c82134-0489-4a84-91b2-de5e9ff651a3","Type":"ContainerStarted","Data":"21cceaabba4fd0d34594a4573758b682f135500a38d763a1ae8e072be715d5a4"} Oct 04 05:03:29 crc kubenswrapper[4802]: E1004 05:03:29.123022 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" podUID="e7c82134-0489-4a84-91b2-de5e9ff651a3" Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.146727 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9" event={"ID":"da0eb8e5-72e3-4d6c-b896-e337363ed73c","Type":"ContainerStarted","Data":"3b5fb6bdeaa8821770d510206bae9c27b0a4326f2946d27909456a6ce54ea7ff"} Oct 04 05:03:29 crc 
kubenswrapper[4802]: I1004 05:03:29.157868 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq"] Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.184972 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" event={"ID":"d90631a7-c7d2-4e82-a841-21980a76d784","Type":"ContainerStarted","Data":"af7e3d6f6c766481326bedf7df3f0cf403c364ed9897a510fd51ea6279336a4f"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.189799 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" event={"ID":"d90631a7-c7d2-4e82-a841-21980a76d784","Type":"ContainerStarted","Data":"63d20d5044c235c48fc0970ed7f5ed60c3f233e24b6f4b2392dada9f3d7a69e1"} Oct 04 05:03:29 crc kubenswrapper[4802]: E1004 05:03:29.197691 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" podUID="d90631a7-c7d2-4e82-a841-21980a76d784" Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.207737 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" event={"ID":"566c2a23-aea4-4ea6-9820-666a22d36d99","Type":"ContainerStarted","Data":"be8a5e428437bb31bf74bb949c9ab362d91bde47d1f8e432522d11481038a7c7"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.207786 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" 
event={"ID":"566c2a23-aea4-4ea6-9820-666a22d36d99","Type":"ContainerStarted","Data":"b08584539ee969f6a928cecfbb7eefe7c5b480a5bd975b6f961eb042945c4fb4"} Oct 04 05:03:29 crc kubenswrapper[4802]: E1004 05:03:29.217541 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" podUID="566c2a23-aea4-4ea6-9820-666a22d36d99" Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.219560 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" event={"ID":"16c65360-b9da-46ee-807a-7a508bb5b97b","Type":"ContainerStarted","Data":"429c008b6bb705d07eaf8fc9216e6bb67a211d4aa71e0d16c0a0e53225540f21"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.219651 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" event={"ID":"16c65360-b9da-46ee-807a-7a508bb5b97b","Type":"ContainerStarted","Data":"30a68b30ed6d616a871ca08fd97f1ae2704899f5bf24cebd6aaef7e930da4632"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.221077 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts" event={"ID":"f72da0d9-79ad-4717-9b92-f45533584fb7","Type":"ContainerStarted","Data":"b6ab72551b976883466276eb81336fb8e6dbe1b0b49acd601ca6f9b1f36b1489"} Oct 04 05:03:29 crc kubenswrapper[4802]: E1004 05:03:29.221599 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" podUID="16c65360-b9da-46ee-807a-7a508bb5b97b" Oct 04 05:03:29 crc kubenswrapper[4802]: E1004 05:03:29.228457 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts" podUID="f72da0d9-79ad-4717-9b92-f45533584fb7" Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.229182 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l" event={"ID":"dd3b5879-0e5a-41f9-ab00-16fc063260cc","Type":"ContainerStarted","Data":"6824a8834c7492a2e398bc0219493737bfeb4dbe1cc312c33fd06d2a12abafc0"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.257369 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" event={"ID":"8975b7de-977f-45a6-b619-1bae2838c9eb","Type":"ContainerStarted","Data":"a48cb340ce876b2e5cb47928864d7ee950e4a582c20e1b86badb5609c210e59e"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.257848 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" event={"ID":"8975b7de-977f-45a6-b619-1bae2838c9eb","Type":"ContainerStarted","Data":"cdfecf361fa72ca966161430245ab42cb751d292576e4d4f566db3558a97d8ad"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.279039 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" 
event={"ID":"2a7bf534-0e93-4374-9eca-3015e9739b8b","Type":"ContainerStarted","Data":"490e79d6fc6ea252655ab2084941a7f88e89d080b5b559d7e0565326c475e47a"} Oct 04 05:03:29 crc kubenswrapper[4802]: I1004 05:03:29.279095 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" event={"ID":"2a7bf534-0e93-4374-9eca-3015e9739b8b","Type":"ContainerStarted","Data":"859c112a447d72e15b134cd3059217ac1451d6657f99a4a2f35caf4d34e6503e"} Oct 04 05:03:29 crc kubenswrapper[4802]: E1004 05:03:29.293530 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" podUID="8975b7de-977f-45a6-b619-1bae2838c9eb" Oct 04 05:03:29 crc kubenswrapper[4802]: E1004 05:03:29.293885 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" podUID="2a7bf534-0e93-4374-9eca-3015e9739b8b" Oct 04 05:03:30 crc kubenswrapper[4802]: I1004 05:03:30.300032 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" event={"ID":"66a174f5-be2c-4fb0-92f0-4cb911033d87","Type":"ContainerStarted","Data":"fb2a5fe5cbbc1a470dbe7b642ce6684bc25d586805d787aa3c67183cac7d8601"} Oct 04 05:03:30 crc kubenswrapper[4802]: I1004 05:03:30.300759 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" event={"ID":"66a174f5-be2c-4fb0-92f0-4cb911033d87","Type":"ContainerStarted","Data":"eb9e7e5e51ad43f4d0d416404fbe605ae744571bd30f6d089a13dc40a9577fa7"} Oct 04 05:03:30 crc kubenswrapper[4802]: I1004 05:03:30.300782 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" event={"ID":"66a174f5-be2c-4fb0-92f0-4cb911033d87","Type":"ContainerStarted","Data":"45e440330bdacdb6062bfa3886177f6114eac157c3ae47e93f1d777f5f1b130e"} Oct 04 05:03:30 crc kubenswrapper[4802]: E1004 05:03:30.304005 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" podUID="16c65360-b9da-46ee-807a-7a508bb5b97b" Oct 04 05:03:30 crc kubenswrapper[4802]: E1004 05:03:30.304109 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts" podUID="f72da0d9-79ad-4717-9b92-f45533584fb7" Oct 04 05:03:30 crc kubenswrapper[4802]: E1004 05:03:30.304113 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" 
podUID="566c2a23-aea4-4ea6-9820-666a22d36d99" Oct 04 05:03:30 crc kubenswrapper[4802]: E1004 05:03:30.304137 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" podUID="2a7bf534-0e93-4374-9eca-3015e9739b8b" Oct 04 05:03:30 crc kubenswrapper[4802]: E1004 05:03:30.308712 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" podUID="e7c82134-0489-4a84-91b2-de5e9ff651a3" Oct 04 05:03:30 crc kubenswrapper[4802]: E1004 05:03:30.308802 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" podUID="8975b7de-977f-45a6-b619-1bae2838c9eb" Oct 04 05:03:30 crc kubenswrapper[4802]: E1004 05:03:30.309239 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" podUID="d90631a7-c7d2-4e82-a841-21980a76d784" Oct 04 05:03:30 crc kubenswrapper[4802]: 
I1004 05:03:30.387105 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" podStartSLOduration=4.387079597 podStartE2EDuration="4.387079597s" podCreationTimestamp="2025-10-04 05:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:03:30.372884389 +0000 UTC m=+1052.780885014" watchObservedRunningTime="2025-10-04 05:03:30.387079597 +0000 UTC m=+1052.795080232" Oct 04 05:03:31 crc kubenswrapper[4802]: I1004 05:03:31.307088 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" Oct 04 05:03:38 crc kubenswrapper[4802]: I1004 05:03:38.481267 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6f965f7c8f-wrfgq" Oct 04 05:03:39 crc kubenswrapper[4802]: E1004 05:03:39.730435 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.148:5001/openstack-k8s-operators/cinder-operator:ff44ba56ab21837921a2f7b5a7ead09b289b699f" Oct 04 05:03:39 crc kubenswrapper[4802]: E1004 05:03:39.730482 4802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.148:5001/openstack-k8s-operators/cinder-operator:ff44ba56ab21837921a2f7b5a7ead09b289b699f" Oct 04 05:03:39 crc kubenswrapper[4802]: E1004 05:03:39.730626 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.148:5001/openstack-k8s-operators/cinder-operator:ff44ba56ab21837921a2f7b5a7ead09b289b699f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47vp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cinder-operator-controller-manager-7654479b5b-qdnwx_openstack-operators(50a3c2cf-e05f-43ac-833a-1ae097417c9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:03:40 crc kubenswrapper[4802]: E1004 05:03:40.171501 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" podUID="50a3c2cf-e05f-43ac-833a-1ae097417c9b" Oct 04 05:03:40 crc kubenswrapper[4802]: I1004 05:03:40.382619 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" event={"ID":"50a3c2cf-e05f-43ac-833a-1ae097417c9b","Type":"ContainerStarted","Data":"2863dbbc27cb512c3ed410558c29c51bd3ce9bc398f1f814770ecc5e3b9d4d41"} Oct 04 05:03:40 crc kubenswrapper[4802]: E1004 05:03:40.384863 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.148:5001/openstack-k8s-operators/cinder-operator:ff44ba56ab21837921a2f7b5a7ead09b289b699f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" podUID="50a3c2cf-e05f-43ac-833a-1ae097417c9b" Oct 04 05:03:40 crc kubenswrapper[4802]: I1004 05:03:40.386020 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" event={"ID":"9f060f22-b0a6-41e0-b88e-3c4411b06f1d","Type":"ContainerStarted","Data":"0c4d75c49134c6f0dc30d9a5c5a45234ac991c6799183b31b6f0b6a99dcb4ab1"} Oct 04 05:03:40 crc kubenswrapper[4802]: I1004 05:03:40.397389 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9" 
event={"ID":"da0eb8e5-72e3-4d6c-b896-e337363ed73c","Type":"ContainerStarted","Data":"87f83cce63b0d84a63202e302a4ca5db972571a2eaafb565941ae0900dee52d9"} Oct 04 05:03:40 crc kubenswrapper[4802]: I1004 05:03:40.405898 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" event={"ID":"f22b8bf6-a48f-41e3-88f4-601a8befda4b","Type":"ContainerStarted","Data":"17414499dc5956157554b425c8c3f6aba739613672c45028061a22c8ba2361de"} Oct 04 05:03:40 crc kubenswrapper[4802]: I1004 05:03:40.408633 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" event={"ID":"e59f4cc7-3f2d-43a7-91f6-cf589392f5fa","Type":"ContainerStarted","Data":"e1d4ecbd087e972106422f19987dc995dbb2fc7aeb1da47cc92894724cdc8374"} Oct 04 05:03:40 crc kubenswrapper[4802]: I1004 05:03:40.423120 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" event={"ID":"d877d19f-e34d-429e-8bf7-e7f9c6c141d1","Type":"ContainerStarted","Data":"878476998be54456509b6660ef50ef237b6b8d9a4f9b0d48bc4836839381c441"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.472015 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l" event={"ID":"dd3b5879-0e5a-41f9-ab00-16fc063260cc","Type":"ContainerStarted","Data":"865173e75191c5aa5fc76cba306a6c331f6e61b0cad9120057ea0f9b014e74a2"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.496909 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" event={"ID":"d1ae58bd-7435-4bb1-819c-2f085a231ce0","Type":"ContainerStarted","Data":"213834017c88a474ddf8fca6203daced8fed7dbab9e9df6244b65c4d9cae09c3"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.524073 4802 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" event={"ID":"9f060f22-b0a6-41e0-b88e-3c4411b06f1d","Type":"ContainerStarted","Data":"9fa87ffbd8bff3757fc887ada88c22ae65859f3b34e4e8a6b5e0494a4ed8e007"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.524596 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.525967 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" event={"ID":"31ac0023-c7da-49a3-8276-063e6c7b8a38","Type":"ContainerStarted","Data":"6987f62c3799329e92535f8b33cf957380cf9e9ddb318b31028b7914b9637bf2"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.527135 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" event={"ID":"f22b8bf6-a48f-41e3-88f4-601a8befda4b","Type":"ContainerStarted","Data":"da6ca5174f3dcac3e6d92b0c34cef2037f5a1c9ab92d4c32caa351e9e69de596"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.527885 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.529227 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" event={"ID":"33f312f2-394b-4ce5-965d-69a464079f55","Type":"ContainerStarted","Data":"852c28f2e4b21c4385d960bc490dcc812d01b36b7504fe9cfe46cd5afe469dca"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.530598 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf" 
event={"ID":"26a2d08a-3357-48e4-8fda-e2fbe339e7a8","Type":"ContainerStarted","Data":"32fae861305361b7a44b4fe745f03d7ad6c10c2f374c9892ea798589c4719175"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.533227 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" event={"ID":"e8f690af-8476-4fae-821f-cc822c9a1273","Type":"ContainerStarted","Data":"975040d41c79ea56f3ebf568b8b55ab0f4ffe24f2bf162675b33a1345ee94d25"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.573248 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" event={"ID":"d877d19f-e34d-429e-8bf7-e7f9c6c141d1","Type":"ContainerStarted","Data":"82aeb6eea8a10b32226ce2480c74b9074f68ddd7148011da11b2f01caeddb33a"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.574021 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.584189 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" event={"ID":"c7242ee7-cb17-471e-8639-fd36ccd2d398","Type":"ContainerStarted","Data":"6c4ea2ca8c575936a6a82da1c73da3d8fccd090758024e6aef062bcea5d5ca23"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.598003 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" podStartSLOduration=3.992069801 podStartE2EDuration="16.597980571s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.120027992 +0000 UTC m=+1049.528028617" lastFinishedPulling="2025-10-04 05:03:39.725938752 +0000 UTC m=+1062.133939387" observedRunningTime="2025-10-04 05:03:41.595301554 +0000 UTC m=+1064.003302189" 
watchObservedRunningTime="2025-10-04 05:03:41.597980571 +0000 UTC m=+1064.005981206" Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.623752 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9" event={"ID":"da0eb8e5-72e3-4d6c-b896-e337363ed73c","Type":"ContainerStarted","Data":"eea258a1d541cceae6437ec39d16fa5283a725e1565a44ad8f0f3bc1c11d01c3"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.627124 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" podStartSLOduration=4.398056059 podStartE2EDuration="16.627104968s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.548147117 +0000 UTC m=+1049.956147742" lastFinishedPulling="2025-10-04 05:03:39.777196026 +0000 UTC m=+1062.185196651" observedRunningTime="2025-10-04 05:03:41.624459072 +0000 UTC m=+1064.032459697" watchObservedRunningTime="2025-10-04 05:03:41.627104968 +0000 UTC m=+1064.035105593" Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.627743 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9" Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.631879 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" event={"ID":"cc8e0f09-d68e-4cab-af19-86180b78cb70","Type":"ContainerStarted","Data":"efdc8855435878566964a4e14908368f99afe182f3a1e61784d663dcdc89da49"} Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.651302 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" 
event={"ID":"fe92c819-2989-4bb4-8051-6d457ac6b121","Type":"ContainerStarted","Data":"e5db5037438c012f5bf498619b147e295c762221f056ef35701c399e481647d6"} Oct 04 05:03:41 crc kubenswrapper[4802]: E1004 05:03:41.657577 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.148:5001/openstack-k8s-operators/cinder-operator:ff44ba56ab21837921a2f7b5a7ead09b289b699f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" podUID="50a3c2cf-e05f-43ac-833a-1ae097417c9b" Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.677405 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" podStartSLOduration=4.498186607 podStartE2EDuration="16.677385393s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.545750978 +0000 UTC m=+1049.953751603" lastFinishedPulling="2025-10-04 05:03:39.724949754 +0000 UTC m=+1062.132950389" observedRunningTime="2025-10-04 05:03:41.649583784 +0000 UTC m=+1064.057584409" watchObservedRunningTime="2025-10-04 05:03:41.677385393 +0000 UTC m=+1064.085386018" Oct 04 05:03:41 crc kubenswrapper[4802]: I1004 05:03:41.728337 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9" podStartSLOduration=4.986557124 podStartE2EDuration="16.728314607s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.983215612 +0000 UTC m=+1050.391216237" lastFinishedPulling="2025-10-04 05:03:39.724973075 +0000 UTC m=+1062.132973720" observedRunningTime="2025-10-04 05:03:41.696550704 +0000 UTC m=+1064.104551329" watchObservedRunningTime="2025-10-04 05:03:41.728314607 +0000 UTC m=+1064.136315222" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.662017 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" event={"ID":"31ac0023-c7da-49a3-8276-063e6c7b8a38","Type":"ContainerStarted","Data":"2b622aa84ec8ac23f6bb56b1b1f5217697588cfb8dd7bdb7565561ef809273ab"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.663589 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.665770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" event={"ID":"fe92c819-2989-4bb4-8051-6d457ac6b121","Type":"ContainerStarted","Data":"297ff310df66e673454f0516351370b0dec8c4ebb2295c565d6bab0e6e24331c"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.666519 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.668719 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf" event={"ID":"26a2d08a-3357-48e4-8fda-e2fbe339e7a8","Type":"ContainerStarted","Data":"f69203db2b6e25f18d82c99d3a4b990170cf11a33e41b0fa03f7a5e83cfaca1d"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.669234 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.671472 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" event={"ID":"e8f690af-8476-4fae-821f-cc822c9a1273","Type":"ContainerStarted","Data":"c137d00700d7478d23314d94132d08cd5e0ff2928ec07929dc7a75c8bb6e0285"} Oct 04 05:03:42 crc kubenswrapper[4802]: 
I1004 05:03:42.671995 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.675540 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" event={"ID":"d1ae58bd-7435-4bb1-819c-2f085a231ce0","Type":"ContainerStarted","Data":"3900ee94c0e2bd0e7b7e00da5dbd69d6b007d5e51fc87244bf63bed4122b9221"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.675762 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.677426 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" event={"ID":"c7242ee7-cb17-471e-8639-fd36ccd2d398","Type":"ContainerStarted","Data":"d839360f7d16ffd5e525d9e605fe0a4a11a23e403099f4d0731ebdceeb238ff5"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.678566 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.681514 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" event={"ID":"33f312f2-394b-4ce5-965d-69a464079f55","Type":"ContainerStarted","Data":"a5a64bb5ce4661a144471ea8ea0e12cc84f3ae62d70ba362fe2e03956681e55d"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.681946 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.685260 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" podStartSLOduration=5.392552903 podStartE2EDuration="17.685236771s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.519804062 +0000 UTC m=+1049.927804687" lastFinishedPulling="2025-10-04 05:03:39.81248793 +0000 UTC m=+1062.220488555" observedRunningTime="2025-10-04 05:03:42.681531954 +0000 UTC m=+1065.089532579" watchObservedRunningTime="2025-10-04 05:03:42.685236771 +0000 UTC m=+1065.093237396" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.693557 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" event={"ID":"e59f4cc7-3f2d-43a7-91f6-cf589392f5fa","Type":"ContainerStarted","Data":"5beb52518573b55d3a5a5b30d40943cdb8d630da9193eae64590a69ac01cf67a"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.694208 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.704196 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" podStartSLOduration=5.469049432 podStartE2EDuration="17.704184036s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.553538662 +0000 UTC m=+1049.961539287" lastFinishedPulling="2025-10-04 05:03:39.788673256 +0000 UTC m=+1062.196673891" observedRunningTime="2025-10-04 05:03:42.701498538 +0000 UTC m=+1065.109499163" watchObservedRunningTime="2025-10-04 05:03:42.704184036 +0000 UTC m=+1065.112184661" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.718521 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" 
event={"ID":"cc8e0f09-d68e-4cab-af19-86180b78cb70","Type":"ContainerStarted","Data":"827662f6cd1e7f0d62f32fa96cb283a102ebbb642a4aa5b1207a3ea0da10f91c"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.718728 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.721394 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" podStartSLOduration=5.407611096 podStartE2EDuration="17.72138229s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.501864156 +0000 UTC m=+1049.909864781" lastFinishedPulling="2025-10-04 05:03:39.81563535 +0000 UTC m=+1062.223635975" observedRunningTime="2025-10-04 05:03:42.720940647 +0000 UTC m=+1065.128941282" watchObservedRunningTime="2025-10-04 05:03:42.72138229 +0000 UTC m=+1065.129382915" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.726456 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l" event={"ID":"dd3b5879-0e5a-41f9-ab00-16fc063260cc","Type":"ContainerStarted","Data":"d591b582e0ffeff96944198af7116d418a9c0f0428b2f182127df2aac4cdffda"} Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.727332 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.744452 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" podStartSLOduration=5.6293881500000005 podStartE2EDuration="17.744423742s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.673533251 +0000 UTC m=+1050.081533876" 
lastFinishedPulling="2025-10-04 05:03:39.788568833 +0000 UTC m=+1062.196569468" observedRunningTime="2025-10-04 05:03:42.738628956 +0000 UTC m=+1065.146629581" watchObservedRunningTime="2025-10-04 05:03:42.744423742 +0000 UTC m=+1065.152424367" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.783551 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" podStartSLOduration=4.926348373 podStartE2EDuration="17.783532026s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:26.931214085 +0000 UTC m=+1049.339214710" lastFinishedPulling="2025-10-04 05:03:39.788397738 +0000 UTC m=+1062.196398363" observedRunningTime="2025-10-04 05:03:42.782941909 +0000 UTC m=+1065.190942534" watchObservedRunningTime="2025-10-04 05:03:42.783532026 +0000 UTC m=+1065.191532651" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.792776 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf" podStartSLOduration=5.672559391 podStartE2EDuration="17.792753431s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.668310581 +0000 UTC m=+1050.076311206" lastFinishedPulling="2025-10-04 05:03:39.788504621 +0000 UTC m=+1062.196505246" observedRunningTime="2025-10-04 05:03:42.757630602 +0000 UTC m=+1065.165631227" watchObservedRunningTime="2025-10-04 05:03:42.792753431 +0000 UTC m=+1065.200754056" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.807406 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" podStartSLOduration=5.158895868 podStartE2EDuration="17.807387112s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.112576758 +0000 UTC m=+1049.520577373" 
lastFinishedPulling="2025-10-04 05:03:39.761067962 +0000 UTC m=+1062.169068617" observedRunningTime="2025-10-04 05:03:42.800394541 +0000 UTC m=+1065.208395156" watchObservedRunningTime="2025-10-04 05:03:42.807387112 +0000 UTC m=+1065.215387737" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.827864 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" podStartSLOduration=5.637057191 podStartE2EDuration="17.82783165s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.561090929 +0000 UTC m=+1049.969091554" lastFinishedPulling="2025-10-04 05:03:39.751865368 +0000 UTC m=+1062.159866013" observedRunningTime="2025-10-04 05:03:42.821431586 +0000 UTC m=+1065.229432211" watchObservedRunningTime="2025-10-04 05:03:42.82783165 +0000 UTC m=+1065.235832275" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.840192 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l" podStartSLOduration=6.062053757 podStartE2EDuration="17.840163984s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.982916644 +0000 UTC m=+1050.390917269" lastFinishedPulling="2025-10-04 05:03:39.761026861 +0000 UTC m=+1062.169027496" observedRunningTime="2025-10-04 05:03:42.836120238 +0000 UTC m=+1065.244120873" watchObservedRunningTime="2025-10-04 05:03:42.840163984 +0000 UTC m=+1065.248164609" Oct 04 05:03:42 crc kubenswrapper[4802]: I1004 05:03:42.858284 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" podStartSLOduration=5.373925578 podStartE2EDuration="17.858257274s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.241826583 +0000 UTC m=+1049.649827208" 
lastFinishedPulling="2025-10-04 05:03:39.726158269 +0000 UTC m=+1062.134158904" observedRunningTime="2025-10-04 05:03:42.85532229 +0000 UTC m=+1065.263322915" watchObservedRunningTime="2025-10-04 05:03:42.858257274 +0000 UTC m=+1065.266257899" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.740970 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-tbhkj" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.765115 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts" event={"ID":"f72da0d9-79ad-4717-9b92-f45533584fb7","Type":"ContainerStarted","Data":"d56281d50722b3bcbd9710876852ce7b7e10ad2112746068d2d1534d7b7c8cbc"} Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.774554 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" event={"ID":"2a7bf534-0e93-4374-9eca-3015e9739b8b","Type":"ContainerStarted","Data":"2dfe6a8d79b0c06c4d1fe246d100cb08504731124730887e9971d8f38e5fd2e8"} Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.775488 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.789123 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5568b5d68-g9stf" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.805682 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-xbjts" podStartSLOduration=2.782545087 podStartE2EDuration="19.805661231s" podCreationTimestamp="2025-10-04 05:03:26 +0000 UTC" firstStartedPulling="2025-10-04 05:03:28.010232129 +0000 UTC 
m=+1050.418232754" lastFinishedPulling="2025-10-04 05:03:45.033348273 +0000 UTC m=+1067.441348898" observedRunningTime="2025-10-04 05:03:45.802015276 +0000 UTC m=+1068.210015901" watchObservedRunningTime="2025-10-04 05:03:45.805661231 +0000 UTC m=+1068.213661856" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.816818 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-8f58bc9db-bczqd" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.817832 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" event={"ID":"d90631a7-c7d2-4e82-a841-21980a76d784","Type":"ContainerStarted","Data":"1577e9051037b57ae914e1a51e88cf77b3130a669d59d282b651e5db639d78e8"} Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.818300 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.857776 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54876c876f-c2wf7" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.863366 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" podStartSLOduration=3.803467858 podStartE2EDuration="20.863343259s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:28.003743732 +0000 UTC m=+1050.411744357" lastFinishedPulling="2025-10-04 05:03:45.063619133 +0000 UTC m=+1067.471619758" observedRunningTime="2025-10-04 05:03:45.859061696 +0000 UTC m=+1068.267062331" watchObservedRunningTime="2025-10-04 05:03:45.863343259 +0000 UTC m=+1068.271343894" Oct 04 05:03:45 crc kubenswrapper[4802]: I1004 05:03:45.888093 4802 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" podStartSLOduration=2.916879438 podStartE2EDuration="19.88807247s" podCreationTimestamp="2025-10-04 05:03:26 +0000 UTC" firstStartedPulling="2025-10-04 05:03:28.062130121 +0000 UTC m=+1050.470130756" lastFinishedPulling="2025-10-04 05:03:45.033323153 +0000 UTC m=+1067.441323788" observedRunningTime="2025-10-04 05:03:45.887464912 +0000 UTC m=+1068.295465537" watchObservedRunningTime="2025-10-04 05:03:45.88807247 +0000 UTC m=+1068.296073095" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.012546 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5f7c849b98-qwpjf" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.204718 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-xqp95" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.219548 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-699b87f775-sgw2w" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.250871 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-dhhv6" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.251999 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-655d88ccb9-6zzzc" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.267846 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-6pkx2" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.342266 4802 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-xc8rb" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.446794 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-579449c7d5-khgqf" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.528857 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-dhxm9" Oct 04 05:03:46 crc kubenswrapper[4802]: I1004 05:03:46.629767 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-m758l" Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.862469 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" event={"ID":"16c65360-b9da-46ee-807a-7a508bb5b97b","Type":"ContainerStarted","Data":"250a16618b95cb59102fe52ab3f7598eb93866d6fd747408a9477b2d67821da6"} Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.863041 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.865161 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" event={"ID":"8975b7de-977f-45a6-b619-1bae2838c9eb","Type":"ContainerStarted","Data":"429f9fdc2733ce49f67510420fd47a9648e6bae9b03551b028b8e36e96cb91a4"} Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.865559 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.868322 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" event={"ID":"e7c82134-0489-4a84-91b2-de5e9ff651a3","Type":"ContainerStarted","Data":"113b89f4341b7687c7caaf4b0b2e8adfae2300ef2d6ceea1b4236b3eb707b3dd"} Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.868604 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.872444 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" event={"ID":"566c2a23-aea4-4ea6-9820-666a22d36d99","Type":"ContainerStarted","Data":"a6d5e87aacc56117fbe9918bd560a354106e6fc2985173b9b8f63acf44120ead"} Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.872740 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.896768 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" podStartSLOduration=3.3753899450000002 podStartE2EDuration="24.896745591s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.987867386 +0000 UTC m=+1050.395868011" lastFinishedPulling="2025-10-04 05:03:49.509223032 +0000 UTC m=+1071.917223657" observedRunningTime="2025-10-04 05:03:49.893852598 +0000 UTC m=+1072.301853243" watchObservedRunningTime="2025-10-04 05:03:49.896745591 +0000 UTC m=+1072.304746226" Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.914932 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" podStartSLOduration=2.40445285 podStartE2EDuration="23.914911023s" 
podCreationTimestamp="2025-10-04 05:03:26 +0000 UTC" firstStartedPulling="2025-10-04 05:03:28.009391655 +0000 UTC m=+1050.417392280" lastFinishedPulling="2025-10-04 05:03:49.519849828 +0000 UTC m=+1071.927850453" observedRunningTime="2025-10-04 05:03:49.910531657 +0000 UTC m=+1072.318532282" watchObservedRunningTime="2025-10-04 05:03:49.914911023 +0000 UTC m=+1072.322911648" Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.951592 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" podStartSLOduration=3.454221861 podStartE2EDuration="24.951567097s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:28.018147056 +0000 UTC m=+1050.426147681" lastFinishedPulling="2025-10-04 05:03:49.515492302 +0000 UTC m=+1071.923492917" observedRunningTime="2025-10-04 05:03:49.945835712 +0000 UTC m=+1072.353836337" watchObservedRunningTime="2025-10-04 05:03:49.951567097 +0000 UTC m=+1072.359567722" Oct 04 05:03:49 crc kubenswrapper[4802]: I1004 05:03:49.974053 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" podStartSLOduration=3.458996727 podStartE2EDuration="24.974033262s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.994164587 +0000 UTC m=+1050.402165222" lastFinishedPulling="2025-10-04 05:03:49.509201132 +0000 UTC m=+1071.917201757" observedRunningTime="2025-10-04 05:03:49.972468527 +0000 UTC m=+1072.380469152" watchObservedRunningTime="2025-10-04 05:03:49.974033262 +0000 UTC m=+1072.382033887" Oct 04 05:03:52 crc kubenswrapper[4802]: I1004 05:03:52.663499 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:03:52 crc kubenswrapper[4802]: I1004 05:03:52.663966 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.299494 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-d7g7m" Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.362350 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.523778 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-mfm5d" Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.682321 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf" Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.887168 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-pfhnt" Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.915745 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-cjxd2" Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.930867 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" 
event={"ID":"50a3c2cf-e05f-43ac-833a-1ae097417c9b","Type":"ContainerStarted","Data":"fff99c86f891ce95e8f150b3e348aad660ca293bf5aa436db3f5720e5bbabfcf"} Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.931775 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" Oct 04 05:03:56 crc kubenswrapper[4802]: I1004 05:03:56.960507 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" podStartSLOduration=3.041548438 podStartE2EDuration="31.960482833s" podCreationTimestamp="2025-10-04 05:03:25 +0000 UTC" firstStartedPulling="2025-10-04 05:03:27.545433219 +0000 UTC m=+1049.953433844" lastFinishedPulling="2025-10-04 05:03:56.464367614 +0000 UTC m=+1078.872368239" observedRunningTime="2025-10-04 05:03:56.95895589 +0000 UTC m=+1079.366956525" watchObservedRunningTime="2025-10-04 05:03:56.960482833 +0000 UTC m=+1079.368483458" Oct 04 05:03:57 crc kubenswrapper[4802]: I1004 05:03:57.338238 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-gj2l2" Oct 04 05:04:06 crc kubenswrapper[4802]: I1004 05:04:06.025200 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7654479b5b-qdnwx" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.662297 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.662921 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" 
podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.662985 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.663715 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be240c6f7c9da0768b330ef7604de12df37604afd0ee9a212f9d7f4a15105260"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.663764 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://be240c6f7c9da0768b330ef7604de12df37604afd0ee9a212f9d7f4a15105260" gracePeriod=600 Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.698923 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-n5s7s"] Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.700745 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.707377 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.707707 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.707909 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zlg2r" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.708103 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.713425 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-n5s7s"] Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.838326 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-config\") pod \"dnsmasq-dns-675f4bcbfc-n5s7s\" (UID: \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\") " pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.838475 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tltqj\" (UniqueName: \"kubernetes.io/projected/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-kube-api-access-tltqj\") pod \"dnsmasq-dns-675f4bcbfc-n5s7s\" (UID: \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\") " pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.916357 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gv72"] Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.937361 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.939526 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tltqj\" (UniqueName: \"kubernetes.io/projected/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-kube-api-access-tltqj\") pod \"dnsmasq-dns-675f4bcbfc-n5s7s\" (UID: \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\") " pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.939605 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-config\") pod \"dnsmasq-dns-675f4bcbfc-n5s7s\" (UID: \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\") " pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.940830 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-config\") pod \"dnsmasq-dns-675f4bcbfc-n5s7s\" (UID: \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\") " pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.941369 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.943359 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gv72"] Oct 04 05:04:22 crc kubenswrapper[4802]: I1004 05:04:22.997166 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tltqj\" (UniqueName: \"kubernetes.io/projected/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-kube-api-access-tltqj\") pod \"dnsmasq-dns-675f4bcbfc-n5s7s\" (UID: \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\") " pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.029033 4802 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.041192 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-config\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.041285 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.041329 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvmh\" (UniqueName: \"kubernetes.io/projected/8ebd4357-cebb-4179-ae60-76f62f7df78a-kube-api-access-4pvmh\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.136485 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="be240c6f7c9da0768b330ef7604de12df37604afd0ee9a212f9d7f4a15105260" exitCode=0 Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.136564 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"be240c6f7c9da0768b330ef7604de12df37604afd0ee9a212f9d7f4a15105260"} Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.136920 4802 scope.go:117] 
"RemoveContainer" containerID="4bf3a67a3aced7a776f95ac83df345cb7b69786ce2cff835d8681589db22e4b4" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.142763 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-config\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.143086 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.143262 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvmh\" (UniqueName: \"kubernetes.io/projected/8ebd4357-cebb-4179-ae60-76f62f7df78a-kube-api-access-4pvmh\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.143723 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-config\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.144055 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc 
kubenswrapper[4802]: I1004 05:04:23.174768 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvmh\" (UniqueName: \"kubernetes.io/projected/8ebd4357-cebb-4179-ae60-76f62f7df78a-kube-api-access-4pvmh\") pod \"dnsmasq-dns-78dd6ddcc-9gv72\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.320901 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.515340 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-n5s7s"] Oct 04 05:04:23 crc kubenswrapper[4802]: W1004 05:04:23.525500 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6ca69b9_bd88_41ed_b3f2_e92da7a6fe68.slice/crio-2f2a5e087cc364ccee48034c090d93f0981848a1aa9bd4eab1f1cccefdc57a53 WatchSource:0}: Error finding container 2f2a5e087cc364ccee48034c090d93f0981848a1aa9bd4eab1f1cccefdc57a53: Status 404 returned error can't find the container with id 2f2a5e087cc364ccee48034c090d93f0981848a1aa9bd4eab1f1cccefdc57a53 Oct 04 05:04:23 crc kubenswrapper[4802]: I1004 05:04:23.747371 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gv72"] Oct 04 05:04:24 crc kubenswrapper[4802]: I1004 05:04:24.145820 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"2c8c1e44715835d6ef2d00db5cee02bc888c676507b5f91dafd169007af48bd8"} Oct 04 05:04:24 crc kubenswrapper[4802]: I1004 05:04:24.147636 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" 
event={"ID":"8ebd4357-cebb-4179-ae60-76f62f7df78a","Type":"ContainerStarted","Data":"bb27bc3d886f80077625b10b0f3685660380ca4bb8754dacb3c7ac342a7fa9ca"} Oct 04 05:04:24 crc kubenswrapper[4802]: I1004 05:04:24.149434 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" event={"ID":"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68","Type":"ContainerStarted","Data":"2f2a5e087cc364ccee48034c090d93f0981848a1aa9bd4eab1f1cccefdc57a53"} Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.757980 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-n5s7s"] Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.792831 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5tcq"] Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.794702 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.801013 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5tcq"] Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.884293 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-config\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.884391 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m82g5\" (UniqueName: \"kubernetes.io/projected/2d35e081-116f-47cd-91de-038af565a316-kube-api-access-m82g5\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:25 crc kubenswrapper[4802]: 
I1004 05:04:25.884430 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.986144 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-config\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.986267 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m82g5\" (UniqueName: \"kubernetes.io/projected/2d35e081-116f-47cd-91de-038af565a316-kube-api-access-m82g5\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.986306 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.987461 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-config\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:25 crc kubenswrapper[4802]: I1004 05:04:25.987558 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.030792 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m82g5\" (UniqueName: \"kubernetes.io/projected/2d35e081-116f-47cd-91de-038af565a316-kube-api-access-m82g5\") pod \"dnsmasq-dns-666b6646f7-s5tcq\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.130124 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.250759 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gv72"] Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.299868 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5fgmm"] Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.302215 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.319486 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5fgmm"] Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.398117 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-config\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.398155 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.398248 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kssw9\" (UniqueName: \"kubernetes.io/projected/790112ed-ae19-4d1e-93b6-f8291d193497-kube-api-access-kssw9\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.499602 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kssw9\" (UniqueName: \"kubernetes.io/projected/790112ed-ae19-4d1e-93b6-f8291d193497-kube-api-access-kssw9\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.500002 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-config\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.500020 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.503448 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.503519 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-config\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.535354 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kssw9\" (UniqueName: \"kubernetes.io/projected/790112ed-ae19-4d1e-93b6-f8291d193497-kube-api-access-kssw9\") pod \"dnsmasq-dns-57d769cc4f-5fgmm\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.635627 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.946408 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5tcq"] Oct 04 05:04:26 crc kubenswrapper[4802]: W1004 05:04:26.962179 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d35e081_116f_47cd_91de_038af565a316.slice/crio-1e52d35d77141b508657a1904df6f96a1e6ada2355cc9134f1b110a1d92669e5 WatchSource:0}: Error finding container 1e52d35d77141b508657a1904df6f96a1e6ada2355cc9134f1b110a1d92669e5: Status 404 returned error can't find the container with id 1e52d35d77141b508657a1904df6f96a1e6ada2355cc9134f1b110a1d92669e5 Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.996251 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:04:26 crc kubenswrapper[4802]: I1004 05:04:26.997879 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.001171 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5zn92" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.001396 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.001588 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.003986 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.004439 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.004691 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.005284 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.033618 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110376 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfvsd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-kube-api-access-dfvsd\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110457 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110498 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110552 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110578 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110621 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110675 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110702 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110731 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110755 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.110781 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.189061 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" 
event={"ID":"2d35e081-116f-47cd-91de-038af565a316","Type":"ContainerStarted","Data":"1e52d35d77141b508657a1904df6f96a1e6ada2355cc9134f1b110a1d92669e5"} Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.213835 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.213911 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.213948 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfvsd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-kube-api-access-dfvsd\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.213989 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.214026 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " 
pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.214083 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.214110 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.214156 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.214195 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.214217 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.214252 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.217721 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.218387 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.218842 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.219303 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.219923 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.219927 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.229474 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.235718 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.235718 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.237254 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.250096 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-dfvsd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-kube-api-access-dfvsd\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.252081 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5fgmm"] Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.274954 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.381980 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.470743 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.472508 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.476338 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.477133 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.477183 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.477403 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.477529 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l9459" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.477783 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.478076 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.478219 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.518394 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.518457 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.518496 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.518530 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.518558 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.518601 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.519985 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.520071 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.520119 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stk55\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-kube-api-access-stk55\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.520237 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.520275 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.621944 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622022 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622054 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622098 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stk55\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-kube-api-access-stk55\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622157 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622182 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622224 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622252 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622280 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622310 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622338 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 
05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.622594 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.623023 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.623385 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.623975 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.625179 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.626245 4802 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.632416 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.632465 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.638661 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.646284 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.651827 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stk55\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-kube-api-access-stk55\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.677016 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.845737 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:04:27 crc kubenswrapper[4802]: I1004 05:04:27.981488 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:04:28 crc kubenswrapper[4802]: I1004 05:04:28.227167 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" event={"ID":"790112ed-ae19-4d1e-93b6-f8291d193497","Type":"ContainerStarted","Data":"fb80829f6ace88fa387ee6f6f01788b3b05df58dc412931e79376b5ae8f6780a"} Oct 04 05:04:28 crc kubenswrapper[4802]: I1004 05:04:28.230513 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf0ca60a-0bbc-41eb-bb00-c32d500506b1","Type":"ContainerStarted","Data":"e209d3a0a0808c3bfa48d59569fa2cc3ac40744b142dfae696232d46acf93b11"} Oct 04 05:04:28 crc kubenswrapper[4802]: I1004 05:04:28.575655 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:04:29 crc kubenswrapper[4802]: I1004 05:04:29.267752 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea","Type":"ContainerStarted","Data":"f892fff3613f332bdab7b0332697a714c8b9cf8a335f9aaf7eff4984144c8d05"} Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.217317 4802 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.220281 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.227730 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.228308 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-d2l2b" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.228442 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.228966 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.229097 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.231961 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.232032 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.247253 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.248853 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.253721 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.253993 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-r2qgg" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.254155 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.254305 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307562 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307598 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67gs\" (UniqueName: \"kubernetes.io/projected/9f6d12b3-4eff-47a3-986d-c51e9425f64f-kube-api-access-t67gs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307615 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 
05:04:30.307654 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f6d12b3-4eff-47a3-986d-c51e9425f64f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307669 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307687 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307708 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307724 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: 
I1004 05:04:30.307743 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-secrets\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307768 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307784 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307806 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307833 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307856 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307873 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307889 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgtf5\" (UniqueName: \"kubernetes.io/projected/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-kube-api-access-xgtf5\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307908 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.307940 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.320145 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-galera-0"] Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409479 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409528 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67gs\" (UniqueName: \"kubernetes.io/projected/9f6d12b3-4eff-47a3-986d-c51e9425f64f-kube-api-access-t67gs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409548 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409571 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409592 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f6d12b3-4eff-47a3-986d-c51e9425f64f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc 
kubenswrapper[4802]: I1004 05:04:30.409653 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409683 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409700 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409719 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-secrets\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409747 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409767 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secrets\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409788 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409817 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409836 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409855 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409873 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgtf5\" (UniqueName: \"kubernetes.io/projected/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-kube-api-access-xgtf5\") pod \"openstack-galera-0\" (UID: 
\"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409894 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.409930 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.413884 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.415238 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.415968 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.416067 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.474528 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.474716 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.475494 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-secrets\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.476379 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.486778 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.503507 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.503881 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.509289 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-config-data-default\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.520107 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f6d12b3-4eff-47a3-986d-c51e9425f64f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.524962 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-kolla-config\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.525231 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.525288 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.526398 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f6d12b3-4eff-47a3-986d-c51e9425f64f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.532691 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.557145 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6d12b3-4eff-47a3-986d-c51e9425f64f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.559041 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.559270 4802 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"memcached-config-data" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.566885 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tsdlb" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.574725 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67gs\" (UniqueName: \"kubernetes.io/projected/9f6d12b3-4eff-47a3-986d-c51e9425f64f-kube-api-access-t67gs\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.584417 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgtf5\" (UniqueName: \"kubernetes.io/projected/e6167c18-4dec-48d9-bd81-1a6b6b9e6488-kube-api-access-xgtf5\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.606293 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f6d12b3-4eff-47a3-986d-c51e9425f64f\") " pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.612376 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e6167c18-4dec-48d9-bd81-1a6b6b9e6488\") " pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.627502 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.749699 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca30c560-397d-48e1-8aa6-cf27e47b055d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.749761 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca30c560-397d-48e1-8aa6-cf27e47b055d-kolla-config\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.749799 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca30c560-397d-48e1-8aa6-cf27e47b055d-config-data\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.749831 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca30c560-397d-48e1-8aa6-cf27e47b055d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.749854 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmhcw\" (UniqueName: \"kubernetes.io/projected/ca30c560-397d-48e1-8aa6-cf27e47b055d-kube-api-access-zmhcw\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 
05:04:30.852105 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca30c560-397d-48e1-8aa6-cf27e47b055d-config-data\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.852195 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca30c560-397d-48e1-8aa6-cf27e47b055d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.852232 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmhcw\" (UniqueName: \"kubernetes.io/projected/ca30c560-397d-48e1-8aa6-cf27e47b055d-kube-api-access-zmhcw\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.852347 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca30c560-397d-48e1-8aa6-cf27e47b055d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.852386 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca30c560-397d-48e1-8aa6-cf27e47b055d-kolla-config\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.853428 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca30c560-397d-48e1-8aa6-cf27e47b055d-config-data\") pod 
\"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.854514 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca30c560-397d-48e1-8aa6-cf27e47b055d-kolla-config\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.858529 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca30c560-397d-48e1-8aa6-cf27e47b055d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.859260 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca30c560-397d-48e1-8aa6-cf27e47b055d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.880692 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmhcw\" (UniqueName: \"kubernetes.io/projected/ca30c560-397d-48e1-8aa6-cf27e47b055d-kube-api-access-zmhcw\") pod \"memcached-0\" (UID: \"ca30c560-397d-48e1-8aa6-cf27e47b055d\") " pod="openstack/memcached-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.905634 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 04 05:04:30 crc kubenswrapper[4802]: I1004 05:04:30.925537 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 04 05:04:32 crc kubenswrapper[4802]: I1004 05:04:32.245277 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 04 05:04:32 crc kubenswrapper[4802]: I1004 05:04:32.246626 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 04 05:04:32 crc kubenswrapper[4802]: I1004 05:04:32.250520 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-fwf2n" Oct 04 05:04:32 crc kubenswrapper[4802]: I1004 05:04:32.267508 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 04 05:04:32 crc kubenswrapper[4802]: I1004 05:04:32.391335 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtdl\" (UniqueName: \"kubernetes.io/projected/0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9-kube-api-access-cqtdl\") pod \"kube-state-metrics-0\" (UID: \"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9\") " pod="openstack/kube-state-metrics-0" Oct 04 05:04:32 crc kubenswrapper[4802]: I1004 05:04:32.493542 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqtdl\" (UniqueName: \"kubernetes.io/projected/0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9-kube-api-access-cqtdl\") pod \"kube-state-metrics-0\" (UID: \"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9\") " pod="openstack/kube-state-metrics-0" Oct 04 05:04:32 crc kubenswrapper[4802]: I1004 05:04:32.544889 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqtdl\" (UniqueName: \"kubernetes.io/projected/0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9-kube-api-access-cqtdl\") pod \"kube-state-metrics-0\" (UID: \"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9\") " pod="openstack/kube-state-metrics-0" Oct 04 05:04:32 crc kubenswrapper[4802]: I1004 05:04:32.586358 4802 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.296994 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-75hqf"] Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.298700 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.300838 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.301397 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.313028 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qkdm6"] Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.313889 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nr6pp" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.314556 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.329125 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75hqf"] Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.334690 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qkdm6"] Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.356776 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gj96\" (UniqueName: \"kubernetes.io/projected/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-kube-api-access-9gj96\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.356843 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-combined-ca-bundle\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.356905 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-run\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.357054 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-scripts\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 
05:04:36.357103 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-run-ovn\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.357155 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-ovn-controller-tls-certs\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.357181 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-log-ovn\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459019 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-log\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459094 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-scripts\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459121 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-run-ovn\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459157 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-ovn-controller-tls-certs\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459176 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-log-ovn\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459206 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-run\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459223 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-etc-ovs\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459251 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/395c3b28-d30d-4457-bc17-3f88298e11a0-scripts\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459266 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-lib\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459303 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gj96\" (UniqueName: \"kubernetes.io/projected/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-kube-api-access-9gj96\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459322 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-combined-ca-bundle\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459349 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wjvc\" (UniqueName: \"kubernetes.io/projected/395c3b28-d30d-4457-bc17-3f88298e11a0-kube-api-access-2wjvc\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459626 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-run\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.459976 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-log-ovn\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.460131 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-run-ovn\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.460181 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-var-run\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.461883 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-scripts\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.467755 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-combined-ca-bundle\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc 
kubenswrapper[4802]: I1004 05:04:36.473306 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-ovn-controller-tls-certs\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.479893 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gj96\" (UniqueName: \"kubernetes.io/projected/f41ad0a7-949f-48d9-9871-0ce5c64e8e13-kube-api-access-9gj96\") pod \"ovn-controller-75hqf\" (UID: \"f41ad0a7-949f-48d9-9871-0ce5c64e8e13\") " pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.561680 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-run\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.561786 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-run\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.561956 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-etc-ovs\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.562085 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/395c3b28-d30d-4457-bc17-3f88298e11a0-scripts\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.562104 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-lib\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.562151 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wjvc\" (UniqueName: \"kubernetes.io/projected/395c3b28-d30d-4457-bc17-3f88298e11a0-kube-api-access-2wjvc\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.562155 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-etc-ovs\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.562234 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-log\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.562371 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-log\") pod \"ovn-controller-ovs-qkdm6\" (UID: 
\"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.562654 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/395c3b28-d30d-4457-bc17-3f88298e11a0-var-lib\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.564156 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/395c3b28-d30d-4457-bc17-3f88298e11a0-scripts\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.577901 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wjvc\" (UniqueName: \"kubernetes.io/projected/395c3b28-d30d-4457-bc17-3f88298e11a0-kube-api-access-2wjvc\") pod \"ovn-controller-ovs-qkdm6\" (UID: \"395c3b28-d30d-4457-bc17-3f88298e11a0\") " pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.622273 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75hqf" Oct 04 05:04:36 crc kubenswrapper[4802]: I1004 05:04:36.657564 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.207634 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.210435 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.212746 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.212954 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.213083 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.214244 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-kjjnp" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.215166 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.215798 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.287299 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-config\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.287397 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.287451 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zjj\" 
(UniqueName: \"kubernetes.io/projected/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-kube-api-access-b6zjj\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.287517 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.287562 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.287621 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.287688 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.287712 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.389327 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.389422 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.389478 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.389513 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.389535 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.389596 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-config\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.389619 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.389705 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zjj\" (UniqueName: \"kubernetes.io/projected/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-kube-api-access-b6zjj\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.390224 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.390524 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.390843 4802 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-config\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.390962 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.394584 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.395587 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.407559 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zjj\" (UniqueName: \"kubernetes.io/projected/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-kube-api-access-b6zjj\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.407975 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.425327 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6\") " pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:37 crc kubenswrapper[4802]: I1004 05:04:37.529378 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.394829 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.396624 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.399762 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.400173 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.400359 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-57pz6" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.400514 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.413494 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.523872 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.523950 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/812c69b4-108a-4b14-83b2-95daa7c5949d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.523983 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812c69b4-108a-4b14-83b2-95daa7c5949d-config\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.524036 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.524070 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/812c69b4-108a-4b14-83b2-95daa7c5949d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.524085 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.524102 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22bs9\" (UniqueName: \"kubernetes.io/projected/812c69b4-108a-4b14-83b2-95daa7c5949d-kube-api-access-22bs9\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.524141 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.625270 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.625351 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.625384 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/812c69b4-108a-4b14-83b2-95daa7c5949d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc 
kubenswrapper[4802]: I1004 05:04:39.625413 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812c69b4-108a-4b14-83b2-95daa7c5949d-config\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.625451 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.625487 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/812c69b4-108a-4b14-83b2-95daa7c5949d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.625502 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.625520 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22bs9\" (UniqueName: \"kubernetes.io/projected/812c69b4-108a-4b14-83b2-95daa7c5949d-kube-api-access-22bs9\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.625670 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.626457 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/812c69b4-108a-4b14-83b2-95daa7c5949d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.627415 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/812c69b4-108a-4b14-83b2-95daa7c5949d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.627556 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812c69b4-108a-4b14-83b2-95daa7c5949d-config\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.632239 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.632757 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc 
kubenswrapper[4802]: I1004 05:04:39.643936 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/812c69b4-108a-4b14-83b2-95daa7c5949d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.646450 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22bs9\" (UniqueName: \"kubernetes.io/projected/812c69b4-108a-4b14-83b2-95daa7c5949d-kube-api-access-22bs9\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.650001 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"812c69b4-108a-4b14-83b2-95daa7c5949d\") " pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:39 crc kubenswrapper[4802]: I1004 05:04:39.726969 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 04 05:04:52 crc kubenswrapper[4802]: E1004 05:04:52.578097 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Oct 04 05:04:52 crc kubenswrapper[4802]: E1004 05:04:52.579103 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfvsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(cf0ca60a-0bbc-41eb-bb00-c32d500506b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:04:52 crc 
kubenswrapper[4802]: E1004 05:04:52.580404 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" Oct 04 05:04:52 crc kubenswrapper[4802]: E1004 05:04:52.607691 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" Oct 04 05:04:52 crc kubenswrapper[4802]: E1004 05:04:52.621748 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Oct 04 05:04:52 crc kubenswrapper[4802]: E1004 05:04:52.622177 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stk55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(78c4949d-d61b-4d3e-aa27-7c8bc4da81ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:04:52 crc 
kubenswrapper[4802]: E1004 05:04:52.623427 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" Oct 04 05:04:53 crc kubenswrapper[4802]: E1004 05:04:53.615389 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" Oct 04 05:04:57 crc kubenswrapper[4802]: I1004 05:04:57.471883 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qkdm6"] Oct 04 05:04:57 crc kubenswrapper[4802]: I1004 05:04:57.556145 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 04 05:04:57 crc kubenswrapper[4802]: W1004 05:04:57.877066 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod395c3b28_d30d_4457_bc17_3f88298e11a0.slice/crio-d98bda829aefb8855121f35ca45c71459bf8612cec393ad9d798babd02f1eb1d WatchSource:0}: Error finding container d98bda829aefb8855121f35ca45c71459bf8612cec393ad9d798babd02f1eb1d: Status 404 returned error can't find the container with id d98bda829aefb8855121f35ca45c71459bf8612cec393ad9d798babd02f1eb1d Oct 04 05:04:57 crc kubenswrapper[4802]: E1004 05:04:57.915209 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 04 05:04:57 crc kubenswrapper[4802]: E1004 05:04:57.915358 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tltqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-675f4bcbfc-n5s7s_openstack(d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:04:57 crc kubenswrapper[4802]: E1004 05:04:57.916727 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" podUID="d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68" Oct 04 05:04:57 crc kubenswrapper[4802]: E1004 05:04:57.927820 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 04 05:04:57 crc kubenswrapper[4802]: E1004 05:04:57.927975 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m82g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-s5tcq_openstack(2d35e081-116f-47cd-91de-038af565a316): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:04:57 crc kubenswrapper[4802]: E1004 05:04:57.929117 4802 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" podUID="2d35e081-116f-47cd-91de-038af565a316" Oct 04 05:04:58 crc kubenswrapper[4802]: E1004 05:04:58.026891 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 04 05:04:58 crc kubenswrapper[4802]: E1004 05:04:58.027047 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pvmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9gv72_openstack(8ebd4357-cebb-4179-ae60-76f62f7df78a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:04:58 crc kubenswrapper[4802]: E1004 05:04:58.028435 4802 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" podUID="8ebd4357-cebb-4179-ae60-76f62f7df78a" Oct 04 05:04:58 crc kubenswrapper[4802]: E1004 05:04:58.033778 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 04 05:04:58 crc kubenswrapper[4802]: E1004 05:04:58.034483 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kssw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-5fgmm_openstack(790112ed-ae19-4d1e-93b6-f8291d193497): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:04:58 crc kubenswrapper[4802]: E1004 05:04:58.035663 4802 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" podUID="790112ed-ae19-4d1e-93b6-f8291d193497" Oct 04 05:04:58 crc kubenswrapper[4802]: W1004 05:04:58.418810 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab1386a_37b3_4fd5_a3c6_56fd5434b7b9.slice/crio-7d42a0ea910387e1149a715c11b55b3266347765d38d77995029bd052a364f02 WatchSource:0}: Error finding container 7d42a0ea910387e1149a715c11b55b3266347765d38d77995029bd052a364f02: Status 404 returned error can't find the container with id 7d42a0ea910387e1149a715c11b55b3266347765d38d77995029bd052a364f02 Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.422284 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.480719 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75hqf"] Oct 04 05:04:58 crc kubenswrapper[4802]: W1004 05:04:58.487315 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6167c18_4dec_48d9_bd81_1a6b6b9e6488.slice/crio-9b6844ec9b6ab0bc6e0eaf0bc62aed44e35a8f244eccaf43e7659f3fda2d80ff WatchSource:0}: Error finding container 9b6844ec9b6ab0bc6e0eaf0bc62aed44e35a8f244eccaf43e7659f3fda2d80ff: Status 404 returned error can't find the container with id 9b6844ec9b6ab0bc6e0eaf0bc62aed44e35a8f244eccaf43e7659f3fda2d80ff Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.490231 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.497970 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 
05:04:58.504335 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.658474 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qkdm6" event={"ID":"395c3b28-d30d-4457-bc17-3f88298e11a0","Type":"ContainerStarted","Data":"d98bda829aefb8855121f35ca45c71459bf8612cec393ad9d798babd02f1eb1d"} Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.659965 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ca30c560-397d-48e1-8aa6-cf27e47b055d","Type":"ContainerStarted","Data":"b2a24c9bf55fd581f4c3b4a653f68ae5d240366db18d49996117c53fefd0eb72"} Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.661509 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6","Type":"ContainerStarted","Data":"dc472e9ec00867a4934b393d499cb3d8af2e61f82e3f1756f4df5563613d61d6"} Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.663348 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6167c18-4dec-48d9-bd81-1a6b6b9e6488","Type":"ContainerStarted","Data":"9b6844ec9b6ab0bc6e0eaf0bc62aed44e35a8f244eccaf43e7659f3fda2d80ff"} Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.666367 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf" event={"ID":"f41ad0a7-949f-48d9-9871-0ce5c64e8e13","Type":"ContainerStarted","Data":"9e79ccc3ad362ca8bf7ff1ad4e300acae894885139590b962acee9b6f952e5c9"} Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.667783 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9","Type":"ContainerStarted","Data":"7d42a0ea910387e1149a715c11b55b3266347765d38d77995029bd052a364f02"} Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 
05:04:58.669822 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f6d12b3-4eff-47a3-986d-c51e9425f64f","Type":"ContainerStarted","Data":"8d33b8e8a1500eac9204c269f83470dded990c8e1e64165db326d6ea72f1329d"} Oct 04 05:04:58 crc kubenswrapper[4802]: E1004 05:04:58.674985 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" podUID="2d35e081-116f-47cd-91de-038af565a316" Oct 04 05:04:58 crc kubenswrapper[4802]: E1004 05:04:58.677911 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" podUID="790112ed-ae19-4d1e-93b6-f8291d193497" Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.697002 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 04 05:04:58 crc kubenswrapper[4802]: W1004 05:04:58.737967 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod812c69b4_108a_4b14_83b2_95daa7c5949d.slice/crio-5f21078b8466c63ce6f3b2066b5d23aa8e1e78efdda64704b97ebc858453333c WatchSource:0}: Error finding container 5f21078b8466c63ce6f3b2066b5d23aa8e1e78efdda64704b97ebc858453333c: Status 404 returned error can't find the container with id 5f21078b8466c63ce6f3b2066b5d23aa8e1e78efdda64704b97ebc858453333c Oct 04 05:04:58 crc kubenswrapper[4802]: I1004 05:04:58.976589 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.095438 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tltqj\" (UniqueName: \"kubernetes.io/projected/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-kube-api-access-tltqj\") pod \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\" (UID: \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\") " Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.095494 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-config\") pod \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\" (UID: \"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68\") " Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.096456 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-config" (OuterVolumeSpecName: "config") pod "d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68" (UID: "d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.103785 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-kube-api-access-tltqj" (OuterVolumeSpecName: "kube-api-access-tltqj") pod "d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68" (UID: "d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68"). InnerVolumeSpecName "kube-api-access-tltqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.172412 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.198710 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tltqj\" (UniqueName: \"kubernetes.io/projected/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-kube-api-access-tltqj\") on node \"crc\" DevicePath \"\"" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.198759 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.299578 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-config\") pod \"8ebd4357-cebb-4179-ae60-76f62f7df78a\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.299700 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-dns-svc\") pod \"8ebd4357-cebb-4179-ae60-76f62f7df78a\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.299833 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pvmh\" (UniqueName: \"kubernetes.io/projected/8ebd4357-cebb-4179-ae60-76f62f7df78a-kube-api-access-4pvmh\") pod \"8ebd4357-cebb-4179-ae60-76f62f7df78a\" (UID: \"8ebd4357-cebb-4179-ae60-76f62f7df78a\") " Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.300366 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ebd4357-cebb-4179-ae60-76f62f7df78a" (UID: "8ebd4357-cebb-4179-ae60-76f62f7df78a"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.301612 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-config" (OuterVolumeSpecName: "config") pod "8ebd4357-cebb-4179-ae60-76f62f7df78a" (UID: "8ebd4357-cebb-4179-ae60-76f62f7df78a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.304062 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebd4357-cebb-4179-ae60-76f62f7df78a-kube-api-access-4pvmh" (OuterVolumeSpecName: "kube-api-access-4pvmh") pod "8ebd4357-cebb-4179-ae60-76f62f7df78a" (UID: "8ebd4357-cebb-4179-ae60-76f62f7df78a"). InnerVolumeSpecName "kube-api-access-4pvmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.404407 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pvmh\" (UniqueName: \"kubernetes.io/projected/8ebd4357-cebb-4179-ae60-76f62f7df78a-kube-api-access-4pvmh\") on node \"crc\" DevicePath \"\"" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.404849 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.404965 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebd4357-cebb-4179-ae60-76f62f7df78a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.599190 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wdhwb"] Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.600764 4802 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.604588 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.612047 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wdhwb"] Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.695184 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" event={"ID":"d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68","Type":"ContainerDied","Data":"2f2a5e087cc364ccee48034c090d93f0981848a1aa9bd4eab1f1cccefdc57a53"} Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.695215 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-n5s7s" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.700982 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"812c69b4-108a-4b14-83b2-95daa7c5949d","Type":"ContainerStarted","Data":"5f21078b8466c63ce6f3b2066b5d23aa8e1e78efdda64704b97ebc858453333c"} Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.709105 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" event={"ID":"8ebd4357-cebb-4179-ae60-76f62f7df78a","Type":"ContainerDied","Data":"bb27bc3d886f80077625b10b0f3685660380ca4bb8754dacb3c7ac342a7fa9ca"} Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.709227 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9gv72" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.709451 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k4bs\" (UniqueName: \"kubernetes.io/projected/61c07999-4012-414b-8762-b64f9ed38503-kube-api-access-2k4bs\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.709495 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/61c07999-4012-414b-8762-b64f9ed38503-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.709522 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/61c07999-4012-414b-8762-b64f9ed38503-ovn-rundir\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.709557 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61c07999-4012-414b-8762-b64f9ed38503-config\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.709582 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61c07999-4012-414b-8762-b64f9ed38503-combined-ca-bundle\") pod 
\"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.709606 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/61c07999-4012-414b-8762-b64f9ed38503-ovs-rundir\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.761968 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5tcq"] Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.815926 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61c07999-4012-414b-8762-b64f9ed38503-config\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.815980 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61c07999-4012-414b-8762-b64f9ed38503-combined-ca-bundle\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.816012 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/61c07999-4012-414b-8762-b64f9ed38503-ovs-rundir\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.816139 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2k4bs\" (UniqueName: \"kubernetes.io/projected/61c07999-4012-414b-8762-b64f9ed38503-kube-api-access-2k4bs\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.816175 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/61c07999-4012-414b-8762-b64f9ed38503-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.816214 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/61c07999-4012-414b-8762-b64f9ed38503-ovn-rundir\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.816492 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/61c07999-4012-414b-8762-b64f9ed38503-ovn-rundir\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.816834 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61c07999-4012-414b-8762-b64f9ed38503-config\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.817336 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/61c07999-4012-414b-8762-b64f9ed38503-ovs-rundir\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.821397 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-n5s7s"] Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.837736 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/61c07999-4012-414b-8762-b64f9ed38503-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.841143 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-n5s7s"] Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.860231 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61c07999-4012-414b-8762-b64f9ed38503-combined-ca-bundle\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.864080 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4bs\" (UniqueName: \"kubernetes.io/projected/61c07999-4012-414b-8762-b64f9ed38503-kube-api-access-2k4bs\") pod \"ovn-controller-metrics-wdhwb\" (UID: \"61c07999-4012-414b-8762-b64f9ed38503\") " pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.877772 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-drsh5"] Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.879302 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.880869 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.899678 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-drsh5"] Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.918204 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-config\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.918264 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzs55\" (UniqueName: \"kubernetes.io/projected/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-kube-api-access-lzs55\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.918283 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.918318 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.961481 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gv72"] Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.988550 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wdhwb" Oct 04 05:04:59 crc kubenswrapper[4802]: I1004 05:04:59.996163 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9gv72"] Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.020612 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-config\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.020748 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzs55\" (UniqueName: \"kubernetes.io/projected/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-kube-api-access-lzs55\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.020776 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.020815 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" 
(UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.022423 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.022465 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.025255 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5fgmm"] Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.027738 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-config\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.046206 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gkmb5"] Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.047896 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.055442 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.062451 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gkmb5"] Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.074205 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzs55\" (UniqueName: \"kubernetes.io/projected/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-kube-api-access-lzs55\") pod \"dnsmasq-dns-7fd796d7df-drsh5\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") " pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.122207 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.122263 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-config\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.122311 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 
05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.122330 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.122398 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9z6h\" (UniqueName: \"kubernetes.io/projected/96bc52e4-8853-4e2e-9246-17b6644e096b-kube-api-access-j9z6h\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.202528 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.223626 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.223683 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.223761 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9z6h\" (UniqueName: 
\"kubernetes.io/projected/96bc52e4-8853-4e2e-9246-17b6644e096b-kube-api-access-j9z6h\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.223794 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.223819 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-config\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.224844 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-config\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.225524 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.225978 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: 
\"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.226425 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.247295 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9z6h\" (UniqueName: \"kubernetes.io/projected/96bc52e4-8853-4e2e-9246-17b6644e096b-kube-api-access-j9z6h\") pod \"dnsmasq-dns-86db49b7ff-gkmb5\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.373869 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebd4357-cebb-4179-ae60-76f62f7df78a" path="/var/lib/kubelet/pods/8ebd4357-cebb-4179-ae60-76f62f7df78a/volumes" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.374218 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68" path="/var/lib/kubelet/pods/d6ca69b9-bd88-41ed-b3f2-e92da7a6fe68/volumes" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.411070 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.675623 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.720811 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" event={"ID":"2d35e081-116f-47cd-91de-038af565a316","Type":"ContainerDied","Data":"1e52d35d77141b508657a1904df6f96a1e6ada2355cc9134f1b110a1d92669e5"} Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.720947 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5tcq" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.736350 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-config\") pod \"2d35e081-116f-47cd-91de-038af565a316\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.736444 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-dns-svc\") pod \"2d35e081-116f-47cd-91de-038af565a316\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.736730 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m82g5\" (UniqueName: \"kubernetes.io/projected/2d35e081-116f-47cd-91de-038af565a316-kube-api-access-m82g5\") pod \"2d35e081-116f-47cd-91de-038af565a316\" (UID: \"2d35e081-116f-47cd-91de-038af565a316\") " Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.737021 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-config" (OuterVolumeSpecName: "config") pod "2d35e081-116f-47cd-91de-038af565a316" (UID: "2d35e081-116f-47cd-91de-038af565a316"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.737446 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d35e081-116f-47cd-91de-038af565a316" (UID: "2d35e081-116f-47cd-91de-038af565a316"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.748279 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d35e081-116f-47cd-91de-038af565a316-kube-api-access-m82g5" (OuterVolumeSpecName: "kube-api-access-m82g5") pod "2d35e081-116f-47cd-91de-038af565a316" (UID: "2d35e081-116f-47cd-91de-038af565a316"). InnerVolumeSpecName "kube-api-access-m82g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.838407 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m82g5\" (UniqueName: \"kubernetes.io/projected/2d35e081-116f-47cd-91de-038af565a316-kube-api-access-m82g5\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.838442 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:00 crc kubenswrapper[4802]: I1004 05:05:00.838453 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d35e081-116f-47cd-91de-038af565a316-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:01 crc kubenswrapper[4802]: I1004 05:05:01.090304 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5tcq"] Oct 04 05:05:01 crc kubenswrapper[4802]: I1004 05:05:01.095820 4802 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5tcq"] Oct 04 05:05:01 crc kubenswrapper[4802]: I1004 05:05:01.183907 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wdhwb"] Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.368256 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d35e081-116f-47cd-91de-038af565a316" path="/var/lib/kubelet/pods/2d35e081-116f-47cd-91de-038af565a316/volumes" Oct 04 05:05:02 crc kubenswrapper[4802]: W1004 05:05:02.510779 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61c07999_4012_414b_8762_b64f9ed38503.slice/crio-dbb3b7216c84cfde4781bce6ccd9daafda38e0e1992bb1b076c1b58bcf9bb6a0 WatchSource:0}: Error finding container dbb3b7216c84cfde4781bce6ccd9daafda38e0e1992bb1b076c1b58bcf9bb6a0: Status 404 returned error can't find the container with id dbb3b7216c84cfde4781bce6ccd9daafda38e0e1992bb1b076c1b58bcf9bb6a0 Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.545691 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.571355 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kssw9\" (UniqueName: \"kubernetes.io/projected/790112ed-ae19-4d1e-93b6-f8291d193497-kube-api-access-kssw9\") pod \"790112ed-ae19-4d1e-93b6-f8291d193497\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.571539 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-dns-svc\") pod \"790112ed-ae19-4d1e-93b6-f8291d193497\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.571567 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-config\") pod \"790112ed-ae19-4d1e-93b6-f8291d193497\" (UID: \"790112ed-ae19-4d1e-93b6-f8291d193497\") " Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.572419 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-config" (OuterVolumeSpecName: "config") pod "790112ed-ae19-4d1e-93b6-f8291d193497" (UID: "790112ed-ae19-4d1e-93b6-f8291d193497"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.573600 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "790112ed-ae19-4d1e-93b6-f8291d193497" (UID: "790112ed-ae19-4d1e-93b6-f8291d193497"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.578459 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790112ed-ae19-4d1e-93b6-f8291d193497-kube-api-access-kssw9" (OuterVolumeSpecName: "kube-api-access-kssw9") pod "790112ed-ae19-4d1e-93b6-f8291d193497" (UID: "790112ed-ae19-4d1e-93b6-f8291d193497"). InnerVolumeSpecName "kube-api-access-kssw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.673804 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.673853 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/790112ed-ae19-4d1e-93b6-f8291d193497-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.673864 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kssw9\" (UniqueName: \"kubernetes.io/projected/790112ed-ae19-4d1e-93b6-f8291d193497-kube-api-access-kssw9\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.737695 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" event={"ID":"790112ed-ae19-4d1e-93b6-f8291d193497","Type":"ContainerDied","Data":"fb80829f6ace88fa387ee6f6f01788b3b05df58dc412931e79376b5ae8f6780a"} Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.737791 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5fgmm" Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.746756 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wdhwb" event={"ID":"61c07999-4012-414b-8762-b64f9ed38503","Type":"ContainerStarted","Data":"dbb3b7216c84cfde4781bce6ccd9daafda38e0e1992bb1b076c1b58bcf9bb6a0"} Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.800321 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5fgmm"] Oct 04 05:05:02 crc kubenswrapper[4802]: I1004 05:05:02.810316 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5fgmm"] Oct 04 05:05:04 crc kubenswrapper[4802]: I1004 05:05:04.378415 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790112ed-ae19-4d1e-93b6-f8291d193497" path="/var/lib/kubelet/pods/790112ed-ae19-4d1e-93b6-f8291d193497/volumes" Oct 04 05:05:06 crc kubenswrapper[4802]: I1004 05:05:06.005914 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-drsh5"] Oct 04 05:05:06 crc kubenswrapper[4802]: W1004 05:05:06.282860 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b5b28c_a2e0_4a8a_ac74_b2398e19976d.slice/crio-4896e6bb4480a71e85cb0bd7ad25d3c53023395cf1ca279b671c4655b19fd4c1 WatchSource:0}: Error finding container 4896e6bb4480a71e85cb0bd7ad25d3c53023395cf1ca279b671c4655b19fd4c1: Status 404 returned error can't find the container with id 4896e6bb4480a71e85cb0bd7ad25d3c53023395cf1ca279b671c4655b19fd4c1 Oct 04 05:05:06 crc kubenswrapper[4802]: I1004 05:05:06.680376 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gkmb5"] Oct 04 05:05:06 crc kubenswrapper[4802]: I1004 05:05:06.780560 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" 
event={"ID":"02b5b28c-a2e0-4a8a-ac74-b2398e19976d","Type":"ContainerStarted","Data":"4896e6bb4480a71e85cb0bd7ad25d3c53023395cf1ca279b671c4655b19fd4c1"} Oct 04 05:05:07 crc kubenswrapper[4802]: W1004 05:05:07.808430 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96bc52e4_8853_4e2e_9246_17b6644e096b.slice/crio-8ff4fe503d5cb87a1d008bd2fed2343433d9ea24fb99020f9ee7cb86cfee7905 WatchSource:0}: Error finding container 8ff4fe503d5cb87a1d008bd2fed2343433d9ea24fb99020f9ee7cb86cfee7905: Status 404 returned error can't find the container with id 8ff4fe503d5cb87a1d008bd2fed2343433d9ea24fb99020f9ee7cb86cfee7905 Oct 04 05:05:08 crc kubenswrapper[4802]: I1004 05:05:08.803536 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qkdm6" event={"ID":"395c3b28-d30d-4457-bc17-3f88298e11a0","Type":"ContainerStarted","Data":"afd2d6c1107f1d35bb8ff564055469e064196567742860f498750789ea4d0cbc"} Oct 04 05:05:08 crc kubenswrapper[4802]: I1004 05:05:08.809398 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" event={"ID":"96bc52e4-8853-4e2e-9246-17b6644e096b","Type":"ContainerStarted","Data":"8ff4fe503d5cb87a1d008bd2fed2343433d9ea24fb99020f9ee7cb86cfee7905"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.816816 4802 generic.go:334] "Generic (PLEG): container finished" podID="395c3b28-d30d-4457-bc17-3f88298e11a0" containerID="afd2d6c1107f1d35bb8ff564055469e064196567742860f498750789ea4d0cbc" exitCode=0 Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.816879 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qkdm6" event={"ID":"395c3b28-d30d-4457-bc17-3f88298e11a0","Type":"ContainerDied","Data":"afd2d6c1107f1d35bb8ff564055469e064196567742860f498750789ea4d0cbc"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.819145 4802 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-metrics-wdhwb" event={"ID":"61c07999-4012-414b-8762-b64f9ed38503","Type":"ContainerStarted","Data":"44dbf66a703e3a586b643e6e65dce06a4b257f1b78cb2bb219e78a737f9d6648"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.821476 4802 generic.go:334] "Generic (PLEG): container finished" podID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" containerID="b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c" exitCode=0 Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.821582 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" event={"ID":"02b5b28c-a2e0-4a8a-ac74-b2398e19976d","Type":"ContainerDied","Data":"b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.823726 4802 generic.go:334] "Generic (PLEG): container finished" podID="96bc52e4-8853-4e2e-9246-17b6644e096b" containerID="849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4" exitCode=0 Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.823812 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" event={"ID":"96bc52e4-8853-4e2e-9246-17b6644e096b","Type":"ContainerDied","Data":"849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.833032 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6167c18-4dec-48d9-bd81-1a6b6b9e6488","Type":"ContainerStarted","Data":"19f46eef9e10896bc9def7e0ee86e51f313f7f361821696d3dfa0cf1634cfe92"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.838383 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf" event={"ID":"f41ad0a7-949f-48d9-9871-0ce5c64e8e13","Type":"ContainerStarted","Data":"1d4f8795dbe826051f53c47fe9ee4d5cad5b9b08fc4c02f951233678278e6eae"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 
05:05:09.838729 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-75hqf" Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.844762 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"812c69b4-108a-4b14-83b2-95daa7c5949d","Type":"ContainerStarted","Data":"05b6a7f7dd86a542cc9d23ee05407a219b413d876af6092538f4c6fa0462eaa0"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.844802 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"812c69b4-108a-4b14-83b2-95daa7c5949d","Type":"ContainerStarted","Data":"b5a4bc05ec73a1d45b4bd5a0f9acf2aa97db4a828c89433616d071ad7b7a0573"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.849064 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea","Type":"ContainerStarted","Data":"a6014bcef3cbb5ec832574558747ef4a77d2d25f64247c10c2a30cebfaee1b8e"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.851967 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ca30c560-397d-48e1-8aa6-cf27e47b055d","Type":"ContainerStarted","Data":"59aa8cbd6ef708da66a5aa40a5c59695d0817defb0d96cc64b48ce41f34c7b9c"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.852212 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.854043 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6","Type":"ContainerStarted","Data":"d5f2f58c91383a5e0e6eab5d9ab8f06c6b0db8d7c1ff7538cdb80ad8dfb4fbda"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.854076 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6","Type":"ContainerStarted","Data":"19e4362271ac5c776421cc087004f8a89ab5dd98db5c6298834bb3e767ee6ad1"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.858920 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9","Type":"ContainerStarted","Data":"0049e92994981693b2d762ab30dc7eb17bb975cb34128319e0cedd0949a81d3e"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.859916 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.861743 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f6d12b3-4eff-47a3-986d-c51e9425f64f","Type":"ContainerStarted","Data":"9b9e2140e5ab698789930395ddc904d424b6f41fe07e0db2e2fb0ad7811712f9"} Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.933903 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-75hqf" podStartSLOduration=24.244521906 podStartE2EDuration="33.933884993s" podCreationTimestamp="2025-10-04 05:04:36 +0000 UTC" firstStartedPulling="2025-10-04 05:04:58.485131343 +0000 UTC m=+1140.893131968" lastFinishedPulling="2025-10-04 05:05:08.17449443 +0000 UTC m=+1150.582495055" observedRunningTime="2025-10-04 05:05:09.928668004 +0000 UTC m=+1152.336668649" watchObservedRunningTime="2025-10-04 05:05:09.933884993 +0000 UTC m=+1152.341885618" Oct 04 05:05:09 crc kubenswrapper[4802]: I1004 05:05:09.967929 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wdhwb" podStartSLOduration=5.058053175 podStartE2EDuration="10.967915906s" podCreationTimestamp="2025-10-04 05:04:59 +0000 UTC" firstStartedPulling="2025-10-04 05:05:02.514063738 +0000 UTC m=+1144.922064363" lastFinishedPulling="2025-10-04 05:05:08.423926479 +0000 
UTC m=+1150.831927094" observedRunningTime="2025-10-04 05:05:09.967104223 +0000 UTC m=+1152.375104848" watchObservedRunningTime="2025-10-04 05:05:09.967915906 +0000 UTC m=+1152.375916531" Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.028169 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.584108371 podStartE2EDuration="32.028139227s" podCreationTimestamp="2025-10-04 05:04:38 +0000 UTC" firstStartedPulling="2025-10-04 05:04:58.74325601 +0000 UTC m=+1141.151256635" lastFinishedPulling="2025-10-04 05:05:08.187286866 +0000 UTC m=+1150.595287491" observedRunningTime="2025-10-04 05:05:10.0191442 +0000 UTC m=+1152.427144825" watchObservedRunningTime="2025-10-04 05:05:10.028139227 +0000 UTC m=+1152.436139852" Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.042267 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=31.081023141 podStartE2EDuration="40.04224383s" podCreationTimestamp="2025-10-04 05:04:30 +0000 UTC" firstStartedPulling="2025-10-04 05:04:58.486656366 +0000 UTC m=+1140.894656991" lastFinishedPulling="2025-10-04 05:05:07.447877055 +0000 UTC m=+1149.855877680" observedRunningTime="2025-10-04 05:05:10.034344604 +0000 UTC m=+1152.442345229" watchObservedRunningTime="2025-10-04 05:05:10.04224383 +0000 UTC m=+1152.450244455" Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.106952 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=28.343958058 podStartE2EDuration="38.106933819s" podCreationTimestamp="2025-10-04 05:04:32 +0000 UTC" firstStartedPulling="2025-10-04 05:04:58.428103303 +0000 UTC m=+1140.836103928" lastFinishedPulling="2025-10-04 05:05:08.191079064 +0000 UTC m=+1150.599079689" observedRunningTime="2025-10-04 05:05:10.104935572 +0000 UTC m=+1152.512936197" watchObservedRunningTime="2025-10-04 
05:05:10.106933819 +0000 UTC m=+1152.514934434" Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.128481 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.82863851 podStartE2EDuration="34.128466034s" podCreationTimestamp="2025-10-04 05:04:36 +0000 UTC" firstStartedPulling="2025-10-04 05:04:57.88741447 +0000 UTC m=+1140.295415095" lastFinishedPulling="2025-10-04 05:05:08.187241994 +0000 UTC m=+1150.595242619" observedRunningTime="2025-10-04 05:05:10.127157647 +0000 UTC m=+1152.535158292" watchObservedRunningTime="2025-10-04 05:05:10.128466034 +0000 UTC m=+1152.536466659" Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.530179 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.873785 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qkdm6" event={"ID":"395c3b28-d30d-4457-bc17-3f88298e11a0","Type":"ContainerStarted","Data":"db77fe7f77615a5f8fb6da627c1f5718dd0baff304eec87232c6bcbb5db27a94"} Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.873837 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qkdm6" event={"ID":"395c3b28-d30d-4457-bc17-3f88298e11a0","Type":"ContainerStarted","Data":"a9d66fff2482248310f672ce5e3c4c8826113051d31ca356c2493e8a772b0e0c"} Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.874595 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.874653 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.876502 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"cf0ca60a-0bbc-41eb-bb00-c32d500506b1","Type":"ContainerStarted","Data":"ac14ea64413095013dde2209208afc2007b001a031ab19bcb37134490027462d"}
Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.879806 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" event={"ID":"02b5b28c-a2e0-4a8a-ac74-b2398e19976d","Type":"ContainerStarted","Data":"d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f"}
Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.880400 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5"
Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.882613 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" event={"ID":"96bc52e4-8853-4e2e-9246-17b6644e096b","Type":"ContainerStarted","Data":"09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19"}
Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.901014 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qkdm6" podStartSLOduration=26.434053472 podStartE2EDuration="34.900994253s" podCreationTimestamp="2025-10-04 05:04:36 +0000 UTC" firstStartedPulling="2025-10-04 05:04:57.88774821 +0000 UTC m=+1140.295748835" lastFinishedPulling="2025-10-04 05:05:06.354688991 +0000 UTC m=+1148.762689616" observedRunningTime="2025-10-04 05:05:10.89846133 +0000 UTC m=+1153.306461955" watchObservedRunningTime="2025-10-04 05:05:10.900994253 +0000 UTC m=+1153.308994888"
Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.922211 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" podStartSLOduration=9.810286672 podStartE2EDuration="11.922181818s" podCreationTimestamp="2025-10-04 05:04:59 +0000 UTC" firstStartedPulling="2025-10-04 05:05:06.309811039 +0000 UTC m=+1148.717811664" lastFinishedPulling="2025-10-04 05:05:08.421706185 +0000 UTC m=+1150.829706810" observedRunningTime="2025-10-04 05:05:10.917830324 +0000 UTC m=+1153.325830949" watchObservedRunningTime="2025-10-04 05:05:10.922181818 +0000 UTC m=+1153.330182443"
Oct 04 05:05:10 crc kubenswrapper[4802]: I1004 05:05:10.966325 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" podStartSLOduration=11.290044442 podStartE2EDuration="11.96630947s" podCreationTimestamp="2025-10-04 05:04:59 +0000 UTC" firstStartedPulling="2025-10-04 05:05:07.810843447 +0000 UTC m=+1150.218844082" lastFinishedPulling="2025-10-04 05:05:08.487108485 +0000 UTC m=+1150.895109110" observedRunningTime="2025-10-04 05:05:10.965083885 +0000 UTC m=+1153.373084520" watchObservedRunningTime="2025-10-04 05:05:10.96630947 +0000 UTC m=+1153.374310095"
Oct 04 05:05:11 crc kubenswrapper[4802]: I1004 05:05:11.889307 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5"
Oct 04 05:05:12 crc kubenswrapper[4802]: I1004 05:05:12.530438 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Oct 04 05:05:12 crc kubenswrapper[4802]: I1004 05:05:12.727894 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Oct 04 05:05:12 crc kubenswrapper[4802]: I1004 05:05:12.764514 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Oct 04 05:05:12 crc kubenswrapper[4802]: I1004 05:05:12.896697 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Oct 04 05:05:13 crc kubenswrapper[4802]: I1004 05:05:13.566119 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Oct 04 05:05:13 crc kubenswrapper[4802]: I1004 05:05:13.908961 4802 generic.go:334] "Generic (PLEG): container finished" podID="9f6d12b3-4eff-47a3-986d-c51e9425f64f" containerID="9b9e2140e5ab698789930395ddc904d424b6f41fe07e0db2e2fb0ad7811712f9" exitCode=0
Oct 04 05:05:13 crc kubenswrapper[4802]: I1004 05:05:13.909027 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f6d12b3-4eff-47a3-986d-c51e9425f64f","Type":"ContainerDied","Data":"9b9e2140e5ab698789930395ddc904d424b6f41fe07e0db2e2fb0ad7811712f9"}
Oct 04 05:05:13 crc kubenswrapper[4802]: I1004 05:05:13.913718 4802 generic.go:334] "Generic (PLEG): container finished" podID="e6167c18-4dec-48d9-bd81-1a6b6b9e6488" containerID="19f46eef9e10896bc9def7e0ee86e51f313f7f361821696d3dfa0cf1634cfe92" exitCode=0
Oct 04 05:05:13 crc kubenswrapper[4802]: I1004 05:05:13.913825 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6167c18-4dec-48d9-bd81-1a6b6b9e6488","Type":"ContainerDied","Data":"19f46eef9e10896bc9def7e0ee86e51f313f7f361821696d3dfa0cf1634cfe92"}
Oct 04 05:05:13 crc kubenswrapper[4802]: I1004 05:05:13.954293 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Oct 04 05:05:13 crc kubenswrapper[4802]: I1004 05:05:13.962098 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.303129 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.305115 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.308369 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.308559 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.308710 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.308842 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9kppq"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.308896 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.420981 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.421267 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2df19f5-e90b-4a43-95ea-f4c64f492de6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.421402 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.421520 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2df19f5-e90b-4a43-95ea-f4c64f492de6-scripts\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.421678 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chs55\" (UniqueName: \"kubernetes.io/projected/c2df19f5-e90b-4a43-95ea-f4c64f492de6-kube-api-access-chs55\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.421890 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.421940 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2df19f5-e90b-4a43-95ea-f4c64f492de6-config\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.523612 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.523694 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2df19f5-e90b-4a43-95ea-f4c64f492de6-scripts\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.523717 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chs55\" (UniqueName: \"kubernetes.io/projected/c2df19f5-e90b-4a43-95ea-f4c64f492de6-kube-api-access-chs55\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.523768 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.523785 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2df19f5-e90b-4a43-95ea-f4c64f492de6-config\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.523946 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.523981 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2df19f5-e90b-4a43-95ea-f4c64f492de6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.524824 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2df19f5-e90b-4a43-95ea-f4c64f492de6-scripts\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.524940 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2df19f5-e90b-4a43-95ea-f4c64f492de6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.525274 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2df19f5-e90b-4a43-95ea-f4c64f492de6-config\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.531334 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.531498 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.531627 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2df19f5-e90b-4a43-95ea-f4c64f492de6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.539952 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chs55\" (UniqueName: \"kubernetes.io/projected/c2df19f5-e90b-4a43-95ea-f4c64f492de6-kube-api-access-chs55\") pod \"ovn-northd-0\" (UID: \"c2df19f5-e90b-4a43-95ea-f4c64f492de6\") " pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.655531 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.922906 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e6167c18-4dec-48d9-bd81-1a6b6b9e6488","Type":"ContainerStarted","Data":"2d195c046240ed9f622377d5ebf9c349329fba3717081f4dc237e2c02bb8a66b"}
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.925780 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f6d12b3-4eff-47a3-986d-c51e9425f64f","Type":"ContainerStarted","Data":"1f09dfcccc82fee8c3a1aaec1d37774348a099487fb48e063927900bc0dc799b"}
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.952287 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=36.266445166 podStartE2EDuration="45.952265612s" podCreationTimestamp="2025-10-04 05:04:29 +0000 UTC" firstStartedPulling="2025-10-04 05:04:58.489958711 +0000 UTC m=+1140.897959326" lastFinishedPulling="2025-10-04 05:05:08.175779147 +0000 UTC m=+1150.583779772" observedRunningTime="2025-10-04 05:05:14.94346565 +0000 UTC m=+1157.351466295" watchObservedRunningTime="2025-10-04 05:05:14.952265612 +0000 UTC m=+1157.360266237"
Oct 04 05:05:14 crc kubenswrapper[4802]: I1004 05:05:14.973354 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=37.030354517 podStartE2EDuration="45.973330824s" podCreationTimestamp="2025-10-04 05:04:29 +0000 UTC" firstStartedPulling="2025-10-04 05:04:58.504873117 +0000 UTC m=+1140.912873742" lastFinishedPulling="2025-10-04 05:05:07.447849424 +0000 UTC m=+1149.855850049" observedRunningTime="2025-10-04 05:05:14.9661915 +0000 UTC m=+1157.374192155" watchObservedRunningTime="2025-10-04 05:05:14.973330824 +0000 UTC m=+1157.381331469"
Oct 04 05:05:15 crc kubenswrapper[4802]: I1004 05:05:15.081520 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Oct 04 05:05:15 crc kubenswrapper[4802]: I1004 05:05:15.206790 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5"
Oct 04 05:05:15 crc kubenswrapper[4802]: I1004 05:05:15.414993 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5"
Oct 04 05:05:15 crc kubenswrapper[4802]: I1004 05:05:15.486949 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-drsh5"]
Oct 04 05:05:15 crc kubenswrapper[4802]: I1004 05:05:15.927074 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Oct 04 05:05:15 crc kubenswrapper[4802]: I1004 05:05:15.937304 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c2df19f5-e90b-4a43-95ea-f4c64f492de6","Type":"ContainerStarted","Data":"9a6c106eb42a5662086ecbcb8cb588e228ae930f3d483d48ef41a985fc9f92f3"}
Oct 04 05:05:15 crc kubenswrapper[4802]: I1004 05:05:15.937528 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" podUID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" containerName="dnsmasq-dns" containerID="cri-o://d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f" gracePeriod=10
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.631350 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5"
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.803087 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-config\") pod \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") "
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.803186 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzs55\" (UniqueName: \"kubernetes.io/projected/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-kube-api-access-lzs55\") pod \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") "
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.803387 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-dns-svc\") pod \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") "
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.803423 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-ovsdbserver-nb\") pod \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\" (UID: \"02b5b28c-a2e0-4a8a-ac74-b2398e19976d\") "
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.808573 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-kube-api-access-lzs55" (OuterVolumeSpecName: "kube-api-access-lzs55") pod "02b5b28c-a2e0-4a8a-ac74-b2398e19976d" (UID: "02b5b28c-a2e0-4a8a-ac74-b2398e19976d"). InnerVolumeSpecName "kube-api-access-lzs55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.845399 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-config" (OuterVolumeSpecName: "config") pod "02b5b28c-a2e0-4a8a-ac74-b2398e19976d" (UID: "02b5b28c-a2e0-4a8a-ac74-b2398e19976d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.846505 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02b5b28c-a2e0-4a8a-ac74-b2398e19976d" (UID: "02b5b28c-a2e0-4a8a-ac74-b2398e19976d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.852761 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02b5b28c-a2e0-4a8a-ac74-b2398e19976d" (UID: "02b5b28c-a2e0-4a8a-ac74-b2398e19976d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.905369 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.905798 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.905812 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-config\") on node \"crc\" DevicePath \"\""
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.905824 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzs55\" (UniqueName: \"kubernetes.io/projected/02b5b28c-a2e0-4a8a-ac74-b2398e19976d-kube-api-access-lzs55\") on node \"crc\" DevicePath \"\""
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.946585 4802 generic.go:334] "Generic (PLEG): container finished" podID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" containerID="d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f" exitCode=0
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.946658 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" event={"ID":"02b5b28c-a2e0-4a8a-ac74-b2398e19976d","Type":"ContainerDied","Data":"d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f"}
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.946675 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5"
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.946722 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-drsh5" event={"ID":"02b5b28c-a2e0-4a8a-ac74-b2398e19976d","Type":"ContainerDied","Data":"4896e6bb4480a71e85cb0bd7ad25d3c53023395cf1ca279b671c4655b19fd4c1"}
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.946745 4802 scope.go:117] "RemoveContainer" containerID="d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f"
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.949459 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c2df19f5-e90b-4a43-95ea-f4c64f492de6","Type":"ContainerStarted","Data":"2f33e47c19f10d7a772075298bf2e97940d9301596ec4f143d821845fb911635"}
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.949512 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c2df19f5-e90b-4a43-95ea-f4c64f492de6","Type":"ContainerStarted","Data":"9574e768068bb6aeee7d7f86b1ace00731a740b8c47a665c10fa7da8c4943f87"}
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.949622 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.973307 4802 scope.go:117] "RemoveContainer" containerID="b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c"
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.986099 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.713466517 podStartE2EDuration="2.986076227s" podCreationTimestamp="2025-10-04 05:05:14 +0000 UTC" firstStartedPulling="2025-10-04 05:05:15.09530448 +0000 UTC m=+1157.503305105" lastFinishedPulling="2025-10-04 05:05:16.36791419 +0000 UTC m=+1158.775914815" observedRunningTime="2025-10-04 05:05:16.973876309 +0000 UTC m=+1159.381876944" watchObservedRunningTime="2025-10-04 05:05:16.986076227 +0000 UTC m=+1159.394076852"
Oct 04 05:05:16 crc kubenswrapper[4802]: I1004 05:05:16.994376 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-drsh5"]
Oct 04 05:05:17 crc kubenswrapper[4802]: I1004 05:05:17.001361 4802 scope.go:117] "RemoveContainer" containerID="d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f"
Oct 04 05:05:17 crc kubenswrapper[4802]: I1004 05:05:17.001390 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-drsh5"]
Oct 04 05:05:17 crc kubenswrapper[4802]: E1004 05:05:17.002103 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f\": container with ID starting with d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f not found: ID does not exist" containerID="d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f"
Oct 04 05:05:17 crc kubenswrapper[4802]: I1004 05:05:17.002155 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f"} err="failed to get container status \"d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f\": rpc error: code = NotFound desc = could not find container \"d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f\": container with ID starting with d5c4e10cb5756d071ceb5f39d45ae53bfbccec6a4eff280a67b37b2ee095bf7f not found: ID does not exist"
Oct 04 05:05:17 crc kubenswrapper[4802]: I1004 05:05:17.002189 4802 scope.go:117] "RemoveContainer" containerID="b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c"
Oct 04 05:05:17 crc kubenswrapper[4802]: E1004 05:05:17.002677 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c\": container with ID starting with b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c not found: ID does not exist" containerID="b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c"
Oct 04 05:05:17 crc kubenswrapper[4802]: I1004 05:05:17.002709 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c"} err="failed to get container status \"b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c\": rpc error: code = NotFound desc = could not find container \"b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c\": container with ID starting with b30f6e5031ebc8cdab29c2fe3ed7832ed5669aef10613c5611ce6c5905ff163c not found: ID does not exist"
Oct 04 05:05:18 crc kubenswrapper[4802]: I1004 05:05:18.370908 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" path="/var/lib/kubelet/pods/02b5b28c-a2e0-4a8a-ac74-b2398e19976d/volumes"
Oct 04 05:05:20 crc kubenswrapper[4802]: I1004 05:05:20.628761 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Oct 04 05:05:20 crc kubenswrapper[4802]: I1004 05:05:20.629189 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Oct 04 05:05:20 crc kubenswrapper[4802]: I1004 05:05:20.905974 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Oct 04 05:05:20 crc kubenswrapper[4802]: I1004 05:05:20.906117 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Oct 04 05:05:22 crc kubenswrapper[4802]: I1004 05:05:22.593144 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Oct 04 05:05:29 crc kubenswrapper[4802]: I1004 05:05:29.711557 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Oct 04 05:05:30 crc kubenswrapper[4802]: I1004 05:05:30.296020 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Oct 04 05:05:30 crc kubenswrapper[4802]: I1004 05:05:30.337052 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e6167c18-4dec-48d9-bd81-1a6b6b9e6488" containerName="galera" probeResult="failure" output=<
Oct 04 05:05:30 crc kubenswrapper[4802]: wsrep_local_state_comment (Joined) differs from Synced
Oct 04 05:05:30 crc kubenswrapper[4802]: >
Oct 04 05:05:30 crc kubenswrapper[4802]: I1004 05:05:30.671864 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Oct 04 05:05:33 crc kubenswrapper[4802]: I1004 05:05:33.346212 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Oct 04 05:05:33 crc kubenswrapper[4802]: I1004 05:05:33.398513 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f6d12b3-4eff-47a3-986d-c51e9425f64f" containerName="galera" probeResult="failure" output=<
Oct 04 05:05:33 crc kubenswrapper[4802]: wsrep_local_state_comment (Joined) differs from Synced
Oct 04 05:05:33 crc kubenswrapper[4802]: >
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.178675 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2m5vb"]
Oct 04 05:05:36 crc kubenswrapper[4802]: E1004 05:05:36.179343 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" containerName="dnsmasq-dns"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.179359 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" containerName="dnsmasq-dns"
Oct 04 05:05:36 crc kubenswrapper[4802]: E1004 05:05:36.179379 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" containerName="init"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.179387 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" containerName="init"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.179520 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b5b28c-a2e0-4a8a-ac74-b2398e19976d" containerName="dnsmasq-dns"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.180069 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2m5vb"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.189191 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2m5vb"]
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.245744 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b26b\" (UniqueName: \"kubernetes.io/projected/c1ff275a-01d8-4b28-a347-9e246e4582c5-kube-api-access-7b26b\") pod \"glance-db-create-2m5vb\" (UID: \"c1ff275a-01d8-4b28-a347-9e246e4582c5\") " pod="openstack/glance-db-create-2m5vb"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.347556 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b26b\" (UniqueName: \"kubernetes.io/projected/c1ff275a-01d8-4b28-a347-9e246e4582c5-kube-api-access-7b26b\") pod \"glance-db-create-2m5vb\" (UID: \"c1ff275a-01d8-4b28-a347-9e246e4582c5\") " pod="openstack/glance-db-create-2m5vb"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.376433 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b26b\" (UniqueName: \"kubernetes.io/projected/c1ff275a-01d8-4b28-a347-9e246e4582c5-kube-api-access-7b26b\") pod \"glance-db-create-2m5vb\" (UID: \"c1ff275a-01d8-4b28-a347-9e246e4582c5\") " pod="openstack/glance-db-create-2m5vb"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.495479 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2m5vb"
Oct 04 05:05:36 crc kubenswrapper[4802]: I1004 05:05:36.937999 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2m5vb"]
Oct 04 05:05:37 crc kubenswrapper[4802]: I1004 05:05:37.125860 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2m5vb" event={"ID":"c1ff275a-01d8-4b28-a347-9e246e4582c5","Type":"ContainerStarted","Data":"d42e1d8ade138ee3a12a6511e4c21db704ae14ddd71748b9b039ed73cb75edf8"}
Oct 04 05:05:38 crc kubenswrapper[4802]: I1004 05:05:38.135492 4802 generic.go:334] "Generic (PLEG): container finished" podID="c1ff275a-01d8-4b28-a347-9e246e4582c5" containerID="7270b2d21b8872762ff0ed9fa95c9649f68bb1048ee57a0ebaece313a06d75d7" exitCode=0
Oct 04 05:05:38 crc kubenswrapper[4802]: I1004 05:05:38.135562 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2m5vb" event={"ID":"c1ff275a-01d8-4b28-a347-9e246e4582c5","Type":"ContainerDied","Data":"7270b2d21b8872762ff0ed9fa95c9649f68bb1048ee57a0ebaece313a06d75d7"}
Oct 04 05:05:39 crc kubenswrapper[4802]: I1004 05:05:39.418032 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2m5vb"
Oct 04 05:05:39 crc kubenswrapper[4802]: I1004 05:05:39.497748 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b26b\" (UniqueName: \"kubernetes.io/projected/c1ff275a-01d8-4b28-a347-9e246e4582c5-kube-api-access-7b26b\") pod \"c1ff275a-01d8-4b28-a347-9e246e4582c5\" (UID: \"c1ff275a-01d8-4b28-a347-9e246e4582c5\") "
Oct 04 05:05:39 crc kubenswrapper[4802]: I1004 05:05:39.503320 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ff275a-01d8-4b28-a347-9e246e4582c5-kube-api-access-7b26b" (OuterVolumeSpecName: "kube-api-access-7b26b") pod "c1ff275a-01d8-4b28-a347-9e246e4582c5" (UID: "c1ff275a-01d8-4b28-a347-9e246e4582c5"). InnerVolumeSpecName "kube-api-access-7b26b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:05:39 crc kubenswrapper[4802]: I1004 05:05:39.599883 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b26b\" (UniqueName: \"kubernetes.io/projected/c1ff275a-01d8-4b28-a347-9e246e4582c5-kube-api-access-7b26b\") on node \"crc\" DevicePath \"\""
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.153019 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2m5vb" event={"ID":"c1ff275a-01d8-4b28-a347-9e246e4582c5","Type":"ContainerDied","Data":"d42e1d8ade138ee3a12a6511e4c21db704ae14ddd71748b9b039ed73cb75edf8"}
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.153080 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42e1d8ade138ee3a12a6511e4c21db704ae14ddd71748b9b039ed73cb75edf8"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.153159 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2m5vb"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.233782 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5f56d"]
Oct 04 05:05:40 crc kubenswrapper[4802]: E1004 05:05:40.234515 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ff275a-01d8-4b28-a347-9e246e4582c5" containerName="mariadb-database-create"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.234663 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ff275a-01d8-4b28-a347-9e246e4582c5" containerName="mariadb-database-create"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.234982 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ff275a-01d8-4b28-a347-9e246e4582c5" containerName="mariadb-database-create"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.235631 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5f56d"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.243225 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5f56d"]
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.312164 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbm7z\" (UniqueName: \"kubernetes.io/projected/4c1a4ed4-c9df-4edc-888c-4082e207cb07-kube-api-access-lbm7z\") pod \"keystone-db-create-5f56d\" (UID: \"4c1a4ed4-c9df-4edc-888c-4082e207cb07\") " pod="openstack/keystone-db-create-5f56d"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.413781 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbm7z\" (UniqueName: \"kubernetes.io/projected/4c1a4ed4-c9df-4edc-888c-4082e207cb07-kube-api-access-lbm7z\") pod \"keystone-db-create-5f56d\" (UID: \"4c1a4ed4-c9df-4edc-888c-4082e207cb07\") " pod="openstack/keystone-db-create-5f56d"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.428973 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbm7z\" (UniqueName: \"kubernetes.io/projected/4c1a4ed4-c9df-4edc-888c-4082e207cb07-kube-api-access-lbm7z\") pod \"keystone-db-create-5f56d\" (UID: \"4c1a4ed4-c9df-4edc-888c-4082e207cb07\") " pod="openstack/keystone-db-create-5f56d"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.554895 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5f56d"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.711027 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8cx9k"]
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.712872 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8cx9k"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.717499 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8cx9k"]
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.818542 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb44j\" (UniqueName: \"kubernetes.io/projected/63205f59-5c09-40c6-a124-524f46f70914-kube-api-access-qb44j\") pod \"placement-db-create-8cx9k\" (UID: \"63205f59-5c09-40c6-a124-524f46f70914\") " pod="openstack/placement-db-create-8cx9k"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.920529 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb44j\" (UniqueName: \"kubernetes.io/projected/63205f59-5c09-40c6-a124-524f46f70914-kube-api-access-qb44j\") pod \"placement-db-create-8cx9k\" (UID: \"63205f59-5c09-40c6-a124-524f46f70914\") " pod="openstack/placement-db-create-8cx9k"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.937657 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb44j\" (UniqueName: \"kubernetes.io/projected/63205f59-5c09-40c6-a124-524f46f70914-kube-api-access-qb44j\") pod \"placement-db-create-8cx9k\" (UID: \"63205f59-5c09-40c6-a124-524f46f70914\") " pod="openstack/placement-db-create-8cx9k"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.952279 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Oct 04 05:05:40 crc kubenswrapper[4802]: I1004 05:05:40.995149 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5f56d"]
Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.030589 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8cx9k"
Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.184493 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5f56d" event={"ID":"4c1a4ed4-c9df-4edc-888c-4082e207cb07","Type":"ContainerStarted","Data":"8f3828afc55ee61d623b94516b0dd0818517e32169d9a4af88fb3a90cbb16204"}
Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.566399 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8cx9k"]
Oct 04 05:05:41 crc kubenswrapper[4802]: W1004 05:05:41.573979 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63205f59_5c09_40c6_a124_524f46f70914.slice/crio-65f469eac6e90de245dbd3fbc918256e0002962b49bddd0647e95bf84abf1a56 WatchSource:0}: Error finding container 65f469eac6e90de245dbd3fbc918256e0002962b49bddd0647e95bf84abf1a56: Status 404 returned error can't find the container with id 65f469eac6e90de245dbd3fbc918256e0002962b49bddd0647e95bf84abf1a56
Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.703783 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-75hqf"
podUID="f41ad0a7-949f-48d9-9871-0ce5c64e8e13" containerName="ovn-controller" probeResult="failure" output=< Oct 04 05:05:41 crc kubenswrapper[4802]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 04 05:05:41 crc kubenswrapper[4802]: > Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.705302 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.706388 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qkdm6" Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.915874 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-75hqf-config-2qdpr"] Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.916788 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.921113 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 04 05:05:41 crc kubenswrapper[4802]: I1004 05:05:41.930522 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75hqf-config-2qdpr"] Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.044045 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-scripts\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.044270 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-additional-scripts\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.044296 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.044348 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run-ovn\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.044392 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-log-ovn\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.044452 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zspg5\" (UniqueName: \"kubernetes.io/projected/91df94aa-c8c6-4fe6-859b-851761fc1007-kube-api-access-zspg5\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.145971 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-scripts\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.146030 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-additional-scripts\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.146069 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.146097 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run-ovn\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.146138 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-log-ovn\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.146188 4802 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zspg5\" (UniqueName: \"kubernetes.io/projected/91df94aa-c8c6-4fe6-859b-851761fc1007-kube-api-access-zspg5\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.146460 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.146541 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run-ovn\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.146596 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-log-ovn\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.147025 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-additional-scripts\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.148531 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-scripts\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.171382 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zspg5\" (UniqueName: \"kubernetes.io/projected/91df94aa-c8c6-4fe6-859b-851761fc1007-kube-api-access-zspg5\") pod \"ovn-controller-75hqf-config-2qdpr\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.193406 4802 generic.go:334] "Generic (PLEG): container finished" podID="4c1a4ed4-c9df-4edc-888c-4082e207cb07" containerID="29cb0a31221ef7d6224ae18fe223f10ecbbda0d8063076f24580c1ae81000fff" exitCode=0 Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.193504 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5f56d" event={"ID":"4c1a4ed4-c9df-4edc-888c-4082e207cb07","Type":"ContainerDied","Data":"29cb0a31221ef7d6224ae18fe223f10ecbbda0d8063076f24580c1ae81000fff"} Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.195287 4802 generic.go:334] "Generic (PLEG): container finished" podID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" containerID="a6014bcef3cbb5ec832574558747ef4a77d2d25f64247c10c2a30cebfaee1b8e" exitCode=0 Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.195354 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea","Type":"ContainerDied","Data":"a6014bcef3cbb5ec832574558747ef4a77d2d25f64247c10c2a30cebfaee1b8e"} Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.196766 4802 generic.go:334] "Generic (PLEG): container finished" podID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" containerID="ac14ea64413095013dde2209208afc2007b001a031ab19bcb37134490027462d" 
exitCode=0 Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.196806 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf0ca60a-0bbc-41eb-bb00-c32d500506b1","Type":"ContainerDied","Data":"ac14ea64413095013dde2209208afc2007b001a031ab19bcb37134490027462d"} Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.199307 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8cx9k" event={"ID":"63205f59-5c09-40c6-a124-524f46f70914","Type":"ContainerDied","Data":"5ae8b4db53f23af26a2918130b6285349ed2a548267eb703ee0e00573e76e47c"} Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.198998 4802 generic.go:334] "Generic (PLEG): container finished" podID="63205f59-5c09-40c6-a124-524f46f70914" containerID="5ae8b4db53f23af26a2918130b6285349ed2a548267eb703ee0e00573e76e47c" exitCode=0 Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.199662 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8cx9k" event={"ID":"63205f59-5c09-40c6-a124-524f46f70914","Type":"ContainerStarted","Data":"65f469eac6e90de245dbd3fbc918256e0002962b49bddd0647e95bf84abf1a56"} Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.242923 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:42 crc kubenswrapper[4802]: I1004 05:05:42.741759 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75hqf-config-2qdpr"] Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.230475 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf-config-2qdpr" event={"ID":"91df94aa-c8c6-4fe6-859b-851761fc1007","Type":"ContainerStarted","Data":"b192a3822d068e1a924e109c8acc06fa28df0f485892aa2f0bc1c29b137c2f35"} Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.230893 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf-config-2qdpr" event={"ID":"91df94aa-c8c6-4fe6-859b-851761fc1007","Type":"ContainerStarted","Data":"a37a770243e13b5b77ccae692fb42f23ce41961833134b90548590736977a84c"} Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.240784 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea","Type":"ContainerStarted","Data":"474eac2cd2061373c309920f72599c82b718bb290c74a858eeac1043eefc3001"} Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.241808 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.247321 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf0ca60a-0bbc-41eb-bb00-c32d500506b1","Type":"ContainerStarted","Data":"60456b7a1a9871496824f0469c232d699d368b16662160fb332db0b69b7018af"} Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.248891 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.269233 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-75hqf-config-2qdpr" podStartSLOduration=2.269216462 podStartE2EDuration="2.269216462s" podCreationTimestamp="2025-10-04 05:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:05:43.266192596 +0000 UTC m=+1185.674193221" watchObservedRunningTime="2025-10-04 05:05:43.269216462 +0000 UTC m=+1185.677217087" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.296274 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.122812255 podStartE2EDuration="1m18.29625637s" podCreationTimestamp="2025-10-04 05:04:25 +0000 UTC" firstStartedPulling="2025-10-04 05:04:28.000098018 +0000 UTC m=+1110.408098663" lastFinishedPulling="2025-10-04 05:05:08.173542153 +0000 UTC m=+1150.581542778" observedRunningTime="2025-10-04 05:05:43.291518515 +0000 UTC m=+1185.699519140" watchObservedRunningTime="2025-10-04 05:05:43.29625637 +0000 UTC m=+1185.704256995" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.323439 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371959.531353 podStartE2EDuration="1m17.323422831s" podCreationTimestamp="2025-10-04 05:04:26 +0000 UTC" firstStartedPulling="2025-10-04 05:04:28.59222386 +0000 UTC m=+1111.000224485" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:05:43.31878562 +0000 UTC m=+1185.726786265" watchObservedRunningTime="2025-10-04 05:05:43.323422831 +0000 UTC m=+1185.731423456" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.608500 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5f56d" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.614033 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8cx9k" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.680200 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb44j\" (UniqueName: \"kubernetes.io/projected/63205f59-5c09-40c6-a124-524f46f70914-kube-api-access-qb44j\") pod \"63205f59-5c09-40c6-a124-524f46f70914\" (UID: \"63205f59-5c09-40c6-a124-524f46f70914\") " Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.680236 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbm7z\" (UniqueName: \"kubernetes.io/projected/4c1a4ed4-c9df-4edc-888c-4082e207cb07-kube-api-access-lbm7z\") pod \"4c1a4ed4-c9df-4edc-888c-4082e207cb07\" (UID: \"4c1a4ed4-c9df-4edc-888c-4082e207cb07\") " Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.688852 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1a4ed4-c9df-4edc-888c-4082e207cb07-kube-api-access-lbm7z" (OuterVolumeSpecName: "kube-api-access-lbm7z") pod "4c1a4ed4-c9df-4edc-888c-4082e207cb07" (UID: "4c1a4ed4-c9df-4edc-888c-4082e207cb07"). InnerVolumeSpecName "kube-api-access-lbm7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.692959 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63205f59-5c09-40c6-a124-524f46f70914-kube-api-access-qb44j" (OuterVolumeSpecName: "kube-api-access-qb44j") pod "63205f59-5c09-40c6-a124-524f46f70914" (UID: "63205f59-5c09-40c6-a124-524f46f70914"). InnerVolumeSpecName "kube-api-access-qb44j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.781505 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb44j\" (UniqueName: \"kubernetes.io/projected/63205f59-5c09-40c6-a124-524f46f70914-kube-api-access-qb44j\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:43 crc kubenswrapper[4802]: I1004 05:05:43.781538 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbm7z\" (UniqueName: \"kubernetes.io/projected/4c1a4ed4-c9df-4edc-888c-4082e207cb07-kube-api-access-lbm7z\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:44 crc kubenswrapper[4802]: I1004 05:05:44.255027 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5f56d" event={"ID":"4c1a4ed4-c9df-4edc-888c-4082e207cb07","Type":"ContainerDied","Data":"8f3828afc55ee61d623b94516b0dd0818517e32169d9a4af88fb3a90cbb16204"} Oct 04 05:05:44 crc kubenswrapper[4802]: I1004 05:05:44.255325 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f3828afc55ee61d623b94516b0dd0818517e32169d9a4af88fb3a90cbb16204" Oct 04 05:05:44 crc kubenswrapper[4802]: I1004 05:05:44.255238 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5f56d" Oct 04 05:05:44 crc kubenswrapper[4802]: I1004 05:05:44.257078 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8cx9k" Oct 04 05:05:44 crc kubenswrapper[4802]: I1004 05:05:44.257174 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8cx9k" event={"ID":"63205f59-5c09-40c6-a124-524f46f70914","Type":"ContainerDied","Data":"65f469eac6e90de245dbd3fbc918256e0002962b49bddd0647e95bf84abf1a56"} Oct 04 05:05:44 crc kubenswrapper[4802]: I1004 05:05:44.257265 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65f469eac6e90de245dbd3fbc918256e0002962b49bddd0647e95bf84abf1a56" Oct 04 05:05:44 crc kubenswrapper[4802]: I1004 05:05:44.258386 4802 generic.go:334] "Generic (PLEG): container finished" podID="91df94aa-c8c6-4fe6-859b-851761fc1007" containerID="b192a3822d068e1a924e109c8acc06fa28df0f485892aa2f0bc1c29b137c2f35" exitCode=0 Oct 04 05:05:44 crc kubenswrapper[4802]: I1004 05:05:44.258414 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf-config-2qdpr" event={"ID":"91df94aa-c8c6-4fe6-859b-851761fc1007","Type":"ContainerDied","Data":"b192a3822d068e1a924e109c8acc06fa28df0f485892aa2f0bc1c29b137c2f35"} Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.529495 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712183 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-scripts\") pod \"91df94aa-c8c6-4fe6-859b-851761fc1007\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712345 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-additional-scripts\") pod \"91df94aa-c8c6-4fe6-859b-851761fc1007\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712381 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zspg5\" (UniqueName: \"kubernetes.io/projected/91df94aa-c8c6-4fe6-859b-851761fc1007-kube-api-access-zspg5\") pod \"91df94aa-c8c6-4fe6-859b-851761fc1007\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712408 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-log-ovn\") pod \"91df94aa-c8c6-4fe6-859b-851761fc1007\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712478 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run\") pod \"91df94aa-c8c6-4fe6-859b-851761fc1007\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712564 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "91df94aa-c8c6-4fe6-859b-851761fc1007" (UID: "91df94aa-c8c6-4fe6-859b-851761fc1007"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712608 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run-ovn\") pod \"91df94aa-c8c6-4fe6-859b-851761fc1007\" (UID: \"91df94aa-c8c6-4fe6-859b-851761fc1007\") " Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712627 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run" (OuterVolumeSpecName: "var-run") pod "91df94aa-c8c6-4fe6-859b-851761fc1007" (UID: "91df94aa-c8c6-4fe6-859b-851761fc1007"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712715 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "91df94aa-c8c6-4fe6-859b-851761fc1007" (UID: "91df94aa-c8c6-4fe6-859b-851761fc1007"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712923 4802 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712935 4802 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.712944 4802 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91df94aa-c8c6-4fe6-859b-851761fc1007-var-run\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.713012 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "91df94aa-c8c6-4fe6-859b-851761fc1007" (UID: "91df94aa-c8c6-4fe6-859b-851761fc1007"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.713231 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-scripts" (OuterVolumeSpecName: "scripts") pod "91df94aa-c8c6-4fe6-859b-851761fc1007" (UID: "91df94aa-c8c6-4fe6-859b-851761fc1007"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.717604 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91df94aa-c8c6-4fe6-859b-851761fc1007-kube-api-access-zspg5" (OuterVolumeSpecName: "kube-api-access-zspg5") pod "91df94aa-c8c6-4fe6-859b-851761fc1007" (UID: "91df94aa-c8c6-4fe6-859b-851761fc1007"). InnerVolumeSpecName "kube-api-access-zspg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.813900 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.813933 4802 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91df94aa-c8c6-4fe6-859b-851761fc1007-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:45 crc kubenswrapper[4802]: I1004 05:05:45.813943 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zspg5\" (UniqueName: \"kubernetes.io/projected/91df94aa-c8c6-4fe6-859b-851761fc1007-kube-api-access-zspg5\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.217442 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-77a1-account-create-4qr87"] Oct 04 05:05:46 crc kubenswrapper[4802]: E1004 05:05:46.217925 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63205f59-5c09-40c6-a124-524f46f70914" containerName="mariadb-database-create" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.217955 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="63205f59-5c09-40c6-a124-524f46f70914" containerName="mariadb-database-create" Oct 04 05:05:46 crc kubenswrapper[4802]: E1004 05:05:46.217996 4802 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4c1a4ed4-c9df-4edc-888c-4082e207cb07" containerName="mariadb-database-create" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.218005 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1a4ed4-c9df-4edc-888c-4082e207cb07" containerName="mariadb-database-create" Oct 04 05:05:46 crc kubenswrapper[4802]: E1004 05:05:46.218021 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91df94aa-c8c6-4fe6-859b-851761fc1007" containerName="ovn-config" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.218029 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="91df94aa-c8c6-4fe6-859b-851761fc1007" containerName="ovn-config" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.218229 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="63205f59-5c09-40c6-a124-524f46f70914" containerName="mariadb-database-create" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.218262 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1a4ed4-c9df-4edc-888c-4082e207cb07" containerName="mariadb-database-create" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.218276 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="91df94aa-c8c6-4fe6-859b-851761fc1007" containerName="ovn-config" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.218920 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-77a1-account-create-4qr87" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.220749 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.227719 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-77a1-account-create-4qr87"] Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.273129 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf-config-2qdpr" event={"ID":"91df94aa-c8c6-4fe6-859b-851761fc1007","Type":"ContainerDied","Data":"a37a770243e13b5b77ccae692fb42f23ce41961833134b90548590736977a84c"} Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.273167 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a37a770243e13b5b77ccae692fb42f23ce41961833134b90548590736977a84c" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.273221 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75hqf-config-2qdpr" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.320785 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbmm\" (UniqueName: \"kubernetes.io/projected/07f0cad5-fa50-4233-b440-1dc9a2afa31e-kube-api-access-qhbmm\") pod \"glance-77a1-account-create-4qr87\" (UID: \"07f0cad5-fa50-4233-b440-1dc9a2afa31e\") " pod="openstack/glance-77a1-account-create-4qr87" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.369584 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-75hqf-config-2qdpr"] Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.369682 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-75hqf-config-2qdpr"] Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.421710 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbmm\" (UniqueName: \"kubernetes.io/projected/07f0cad5-fa50-4233-b440-1dc9a2afa31e-kube-api-access-qhbmm\") pod \"glance-77a1-account-create-4qr87\" (UID: \"07f0cad5-fa50-4233-b440-1dc9a2afa31e\") " pod="openstack/glance-77a1-account-create-4qr87" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.452328 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbmm\" (UniqueName: \"kubernetes.io/projected/07f0cad5-fa50-4233-b440-1dc9a2afa31e-kube-api-access-qhbmm\") pod \"glance-77a1-account-create-4qr87\" (UID: \"07f0cad5-fa50-4233-b440-1dc9a2afa31e\") " pod="openstack/glance-77a1-account-create-4qr87" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.468612 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-75hqf-config-cdb7p"] Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.469918 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.472739 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.478175 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75hqf-config-cdb7p"] Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.536140 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-77a1-account-create-4qr87" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.628585 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-log-ovn\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.628703 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-scripts\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.628789 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run-ovn\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.628839 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.628908 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w286s\" (UniqueName: \"kubernetes.io/projected/32cc553e-caad-432c-aa98-62eb6792c2ac-kube-api-access-w286s\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.628935 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-additional-scripts\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.659547 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-75hqf" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.731449 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run-ovn\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.731511 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") 
" pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.731556 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w286s\" (UniqueName: \"kubernetes.io/projected/32cc553e-caad-432c-aa98-62eb6792c2ac-kube-api-access-w286s\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.731577 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-additional-scripts\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.731601 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-log-ovn\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.731665 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-scripts\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.732350 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " 
pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.732350 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-log-ovn\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.732392 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run-ovn\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.732658 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-additional-scripts\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.733511 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-scripts\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.752170 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w286s\" (UniqueName: \"kubernetes.io/projected/32cc553e-caad-432c-aa98-62eb6792c2ac-kube-api-access-w286s\") pod \"ovn-controller-75hqf-config-cdb7p\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " 
pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.800597 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:46 crc kubenswrapper[4802]: I1004 05:05:46.855120 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-77a1-account-create-4qr87"] Oct 04 05:05:47 crc kubenswrapper[4802]: I1004 05:05:47.269417 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75hqf-config-cdb7p"] Oct 04 05:05:47 crc kubenswrapper[4802]: W1004 05:05:47.269797 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32cc553e_caad_432c_aa98_62eb6792c2ac.slice/crio-11bf68acc3556328013fe3e4b9f70a078cb59f7a2379d35573af6d03b019442a WatchSource:0}: Error finding container 11bf68acc3556328013fe3e4b9f70a078cb59f7a2379d35573af6d03b019442a: Status 404 returned error can't find the container with id 11bf68acc3556328013fe3e4b9f70a078cb59f7a2379d35573af6d03b019442a Oct 04 05:05:47 crc kubenswrapper[4802]: I1004 05:05:47.287569 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf-config-cdb7p" event={"ID":"32cc553e-caad-432c-aa98-62eb6792c2ac","Type":"ContainerStarted","Data":"11bf68acc3556328013fe3e4b9f70a078cb59f7a2379d35573af6d03b019442a"} Oct 04 05:05:47 crc kubenswrapper[4802]: I1004 05:05:47.288966 4802 generic.go:334] "Generic (PLEG): container finished" podID="07f0cad5-fa50-4233-b440-1dc9a2afa31e" containerID="a458e41944b5197a1326b0e25eb2a7378db1fdee7da19b23915070165725cb7d" exitCode=0 Oct 04 05:05:47 crc kubenswrapper[4802]: I1004 05:05:47.289012 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-77a1-account-create-4qr87" 
event={"ID":"07f0cad5-fa50-4233-b440-1dc9a2afa31e","Type":"ContainerDied","Data":"a458e41944b5197a1326b0e25eb2a7378db1fdee7da19b23915070165725cb7d"} Oct 04 05:05:47 crc kubenswrapper[4802]: I1004 05:05:47.289035 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-77a1-account-create-4qr87" event={"ID":"07f0cad5-fa50-4233-b440-1dc9a2afa31e","Type":"ContainerStarted","Data":"971b2242664c4ab74a7f8c2ffeb52ba98d239e2263e4f71729dab48cfcc8fb62"} Oct 04 05:05:48 crc kubenswrapper[4802]: I1004 05:05:48.298950 4802 generic.go:334] "Generic (PLEG): container finished" podID="32cc553e-caad-432c-aa98-62eb6792c2ac" containerID="901b85df8417b0551c3d1a54333431ac985324da78e35e3b6af6afaf4bb3b8bc" exitCode=0 Oct 04 05:05:48 crc kubenswrapper[4802]: I1004 05:05:48.299008 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf-config-cdb7p" event={"ID":"32cc553e-caad-432c-aa98-62eb6792c2ac","Type":"ContainerDied","Data":"901b85df8417b0551c3d1a54333431ac985324da78e35e3b6af6afaf4bb3b8bc"} Oct 04 05:05:48 crc kubenswrapper[4802]: I1004 05:05:48.370769 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91df94aa-c8c6-4fe6-859b-851761fc1007" path="/var/lib/kubelet/pods/91df94aa-c8c6-4fe6-859b-851761fc1007/volumes" Oct 04 05:05:48 crc kubenswrapper[4802]: I1004 05:05:48.610705 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-77a1-account-create-4qr87" Oct 04 05:05:48 crc kubenswrapper[4802]: I1004 05:05:48.760737 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhbmm\" (UniqueName: \"kubernetes.io/projected/07f0cad5-fa50-4233-b440-1dc9a2afa31e-kube-api-access-qhbmm\") pod \"07f0cad5-fa50-4233-b440-1dc9a2afa31e\" (UID: \"07f0cad5-fa50-4233-b440-1dc9a2afa31e\") " Oct 04 05:05:48 crc kubenswrapper[4802]: I1004 05:05:48.767549 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f0cad5-fa50-4233-b440-1dc9a2afa31e-kube-api-access-qhbmm" (OuterVolumeSpecName: "kube-api-access-qhbmm") pod "07f0cad5-fa50-4233-b440-1dc9a2afa31e" (UID: "07f0cad5-fa50-4233-b440-1dc9a2afa31e"). InnerVolumeSpecName "kube-api-access-qhbmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:48 crc kubenswrapper[4802]: I1004 05:05:48.862833 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhbmm\" (UniqueName: \"kubernetes.io/projected/07f0cad5-fa50-4233-b440-1dc9a2afa31e-kube-api-access-qhbmm\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.310112 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-77a1-account-create-4qr87" event={"ID":"07f0cad5-fa50-4233-b440-1dc9a2afa31e","Type":"ContainerDied","Data":"971b2242664c4ab74a7f8c2ffeb52ba98d239e2263e4f71729dab48cfcc8fb62"} Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.310183 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="971b2242664c4ab74a7f8c2ffeb52ba98d239e2263e4f71729dab48cfcc8fb62" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.310261 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-77a1-account-create-4qr87" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.607762 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674123 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-scripts\") pod \"32cc553e-caad-432c-aa98-62eb6792c2ac\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674207 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-log-ovn\") pod \"32cc553e-caad-432c-aa98-62eb6792c2ac\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674253 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w286s\" (UniqueName: \"kubernetes.io/projected/32cc553e-caad-432c-aa98-62eb6792c2ac-kube-api-access-w286s\") pod \"32cc553e-caad-432c-aa98-62eb6792c2ac\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674286 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-additional-scripts\") pod \"32cc553e-caad-432c-aa98-62eb6792c2ac\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674312 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run-ovn\") pod \"32cc553e-caad-432c-aa98-62eb6792c2ac\" (UID: 
\"32cc553e-caad-432c-aa98-62eb6792c2ac\") " Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674521 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "32cc553e-caad-432c-aa98-62eb6792c2ac" (UID: "32cc553e-caad-432c-aa98-62eb6792c2ac"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674572 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "32cc553e-caad-432c-aa98-62eb6792c2ac" (UID: "32cc553e-caad-432c-aa98-62eb6792c2ac"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674629 4802 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.674652 4802 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.675415 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "32cc553e-caad-432c-aa98-62eb6792c2ac" (UID: "32cc553e-caad-432c-aa98-62eb6792c2ac"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.675797 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-scripts" (OuterVolumeSpecName: "scripts") pod "32cc553e-caad-432c-aa98-62eb6792c2ac" (UID: "32cc553e-caad-432c-aa98-62eb6792c2ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.681869 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cc553e-caad-432c-aa98-62eb6792c2ac-kube-api-access-w286s" (OuterVolumeSpecName: "kube-api-access-w286s") pod "32cc553e-caad-432c-aa98-62eb6792c2ac" (UID: "32cc553e-caad-432c-aa98-62eb6792c2ac"). InnerVolumeSpecName "kube-api-access-w286s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.775543 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run\") pod \"32cc553e-caad-432c-aa98-62eb6792c2ac\" (UID: \"32cc553e-caad-432c-aa98-62eb6792c2ac\") " Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.775838 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run" (OuterVolumeSpecName: "var-run") pod "32cc553e-caad-432c-aa98-62eb6792c2ac" (UID: "32cc553e-caad-432c-aa98-62eb6792c2ac"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.776105 4802 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.776131 4802 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32cc553e-caad-432c-aa98-62eb6792c2ac-var-run\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.776142 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32cc553e-caad-432c-aa98-62eb6792c2ac-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:49 crc kubenswrapper[4802]: I1004 05:05:49.776154 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w286s\" (UniqueName: \"kubernetes.io/projected/32cc553e-caad-432c-aa98-62eb6792c2ac-kube-api-access-w286s\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.321906 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75hqf-config-cdb7p" event={"ID":"32cc553e-caad-432c-aa98-62eb6792c2ac","Type":"ContainerDied","Data":"11bf68acc3556328013fe3e4b9f70a078cb59f7a2379d35573af6d03b019442a"} Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.321954 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11bf68acc3556328013fe3e4b9f70a078cb59f7a2379d35573af6d03b019442a" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.321978 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75hqf-config-cdb7p" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.452450 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7995-account-create-ccmzc"] Oct 04 05:05:50 crc kubenswrapper[4802]: E1004 05:05:50.452874 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cc553e-caad-432c-aa98-62eb6792c2ac" containerName="ovn-config" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.452897 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cc553e-caad-432c-aa98-62eb6792c2ac" containerName="ovn-config" Oct 04 05:05:50 crc kubenswrapper[4802]: E1004 05:05:50.452920 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f0cad5-fa50-4233-b440-1dc9a2afa31e" containerName="mariadb-account-create" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.452930 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f0cad5-fa50-4233-b440-1dc9a2afa31e" containerName="mariadb-account-create" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.453131 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f0cad5-fa50-4233-b440-1dc9a2afa31e" containerName="mariadb-account-create" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.453160 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cc553e-caad-432c-aa98-62eb6792c2ac" containerName="ovn-config" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.453830 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7995-account-create-ccmzc" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.456606 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.460578 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7995-account-create-ccmzc"] Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.587133 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmxl\" (UniqueName: \"kubernetes.io/projected/80f65253-f4a0-477e-a6f1-9773fde497b5-kube-api-access-mwmxl\") pod \"keystone-7995-account-create-ccmzc\" (UID: \"80f65253-f4a0-477e-a6f1-9773fde497b5\") " pod="openstack/keystone-7995-account-create-ccmzc" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.672363 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-75hqf-config-cdb7p"] Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.677438 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-75hqf-config-cdb7p"] Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.688443 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwmxl\" (UniqueName: \"kubernetes.io/projected/80f65253-f4a0-477e-a6f1-9773fde497b5-kube-api-access-mwmxl\") pod \"keystone-7995-account-create-ccmzc\" (UID: \"80f65253-f4a0-477e-a6f1-9773fde497b5\") " pod="openstack/keystone-7995-account-create-ccmzc" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.706140 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwmxl\" (UniqueName: \"kubernetes.io/projected/80f65253-f4a0-477e-a6f1-9773fde497b5-kube-api-access-mwmxl\") pod \"keystone-7995-account-create-ccmzc\" (UID: \"80f65253-f4a0-477e-a6f1-9773fde497b5\") " 
pod="openstack/keystone-7995-account-create-ccmzc" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.782723 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7995-account-create-ccmzc" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.896039 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cf51-account-create-hpqs2"] Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.897889 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cf51-account-create-hpqs2" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.901338 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.902347 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxhk\" (UniqueName: \"kubernetes.io/projected/419d0cb8-1278-4521-a0b6-207795fdd75e-kube-api-access-6bxhk\") pod \"placement-cf51-account-create-hpqs2\" (UID: \"419d0cb8-1278-4521-a0b6-207795fdd75e\") " pod="openstack/placement-cf51-account-create-hpqs2" Oct 04 05:05:50 crc kubenswrapper[4802]: I1004 05:05:50.917556 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cf51-account-create-hpqs2"] Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.003887 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bxhk\" (UniqueName: \"kubernetes.io/projected/419d0cb8-1278-4521-a0b6-207795fdd75e-kube-api-access-6bxhk\") pod \"placement-cf51-account-create-hpqs2\" (UID: \"419d0cb8-1278-4521-a0b6-207795fdd75e\") " pod="openstack/placement-cf51-account-create-hpqs2" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.026063 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bxhk\" (UniqueName: 
\"kubernetes.io/projected/419d0cb8-1278-4521-a0b6-207795fdd75e-kube-api-access-6bxhk\") pod \"placement-cf51-account-create-hpqs2\" (UID: \"419d0cb8-1278-4521-a0b6-207795fdd75e\") " pod="openstack/placement-cf51-account-create-hpqs2" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.205555 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7995-account-create-ccmzc"] Oct 04 05:05:51 crc kubenswrapper[4802]: W1004 05:05:51.210569 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80f65253_f4a0_477e_a6f1_9773fde497b5.slice/crio-989cb9f84b332741e6ad6b8d269afd346c51a0142c619baceb826f208c473f28 WatchSource:0}: Error finding container 989cb9f84b332741e6ad6b8d269afd346c51a0142c619baceb826f208c473f28: Status 404 returned error can't find the container with id 989cb9f84b332741e6ad6b8d269afd346c51a0142c619baceb826f208c473f28 Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.236962 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cf51-account-create-hpqs2" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.333312 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7995-account-create-ccmzc" event={"ID":"80f65253-f4a0-477e-a6f1-9773fde497b5","Type":"ContainerStarted","Data":"989cb9f84b332741e6ad6b8d269afd346c51a0142c619baceb826f208c473f28"} Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.452289 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rn59p"] Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.453863 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.457430 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.457770 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x45mr" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.458349 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rn59p"] Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.515005 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-combined-ca-bundle\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.515078 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-db-sync-config-data\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.515249 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7r7n\" (UniqueName: \"kubernetes.io/projected/ed629005-3761-4623-99e5-723e05932230-kube-api-access-x7r7n\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.515314 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-config-data\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.616472 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-db-sync-config-data\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.616542 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7r7n\" (UniqueName: \"kubernetes.io/projected/ed629005-3761-4623-99e5-723e05932230-kube-api-access-x7r7n\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.616568 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-config-data\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.616631 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-combined-ca-bundle\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.622822 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-config-data\") pod \"glance-db-sync-rn59p\" (UID: 
\"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.622824 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-db-sync-config-data\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.624746 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-combined-ca-bundle\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.635283 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7r7n\" (UniqueName: \"kubernetes.io/projected/ed629005-3761-4623-99e5-723e05932230-kube-api-access-x7r7n\") pod \"glance-db-sync-rn59p\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.682031 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cf51-account-create-hpqs2"] Oct 04 05:05:51 crc kubenswrapper[4802]: W1004 05:05:51.696815 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod419d0cb8_1278_4521_a0b6_207795fdd75e.slice/crio-ca5973b88c959f4f25197ebecc914725e8102ab64676b5026582bd6f067b8a7a WatchSource:0}: Error finding container ca5973b88c959f4f25197ebecc914725e8102ab64676b5026582bd6f067b8a7a: Status 404 returned error can't find the container with id ca5973b88c959f4f25197ebecc914725e8102ab64676b5026582bd6f067b8a7a Oct 04 05:05:51 crc kubenswrapper[4802]: I1004 05:05:51.774298 4802 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rn59p" Oct 04 05:05:52 crc kubenswrapper[4802]: I1004 05:05:52.297263 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rn59p"] Oct 04 05:05:52 crc kubenswrapper[4802]: I1004 05:05:52.344864 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cf51-account-create-hpqs2" event={"ID":"419d0cb8-1278-4521-a0b6-207795fdd75e","Type":"ContainerStarted","Data":"4d77bf6a9a71e08a591449c56506a2e3d8be7ac379462f9cffc53d05dd0ca081"} Oct 04 05:05:52 crc kubenswrapper[4802]: I1004 05:05:52.344907 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cf51-account-create-hpqs2" event={"ID":"419d0cb8-1278-4521-a0b6-207795fdd75e","Type":"ContainerStarted","Data":"ca5973b88c959f4f25197ebecc914725e8102ab64676b5026582bd6f067b8a7a"} Oct 04 05:05:52 crc kubenswrapper[4802]: I1004 05:05:52.346394 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rn59p" event={"ID":"ed629005-3761-4623-99e5-723e05932230","Type":"ContainerStarted","Data":"17262f724c7c0274512ff795e67163afe5c61b9990566e44c05099fd3cc3ab88"} Oct 04 05:05:52 crc kubenswrapper[4802]: I1004 05:05:52.348095 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7995-account-create-ccmzc" event={"ID":"80f65253-f4a0-477e-a6f1-9773fde497b5","Type":"ContainerStarted","Data":"a10810ffa7d524cb8fbff339ea49a2ce39578cb3c79149b6e1a7816a26d72c7e"} Oct 04 05:05:52 crc kubenswrapper[4802]: I1004 05:05:52.365057 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7995-account-create-ccmzc" podStartSLOduration=2.365042109 podStartE2EDuration="2.365042109s" podCreationTimestamp="2025-10-04 05:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:05:52.359542922 +0000 UTC 
m=+1194.767543547" watchObservedRunningTime="2025-10-04 05:05:52.365042109 +0000 UTC m=+1194.773042734" Oct 04 05:05:52 crc kubenswrapper[4802]: I1004 05:05:52.370266 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cc553e-caad-432c-aa98-62eb6792c2ac" path="/var/lib/kubelet/pods/32cc553e-caad-432c-aa98-62eb6792c2ac/volumes" Oct 04 05:05:54 crc kubenswrapper[4802]: I1004 05:05:54.369765 4802 generic.go:334] "Generic (PLEG): container finished" podID="80f65253-f4a0-477e-a6f1-9773fde497b5" containerID="a10810ffa7d524cb8fbff339ea49a2ce39578cb3c79149b6e1a7816a26d72c7e" exitCode=0 Oct 04 05:05:54 crc kubenswrapper[4802]: I1004 05:05:54.370433 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7995-account-create-ccmzc" event={"ID":"80f65253-f4a0-477e-a6f1-9773fde497b5","Type":"ContainerDied","Data":"a10810ffa7d524cb8fbff339ea49a2ce39578cb3c79149b6e1a7816a26d72c7e"} Oct 04 05:05:54 crc kubenswrapper[4802]: I1004 05:05:54.372099 4802 generic.go:334] "Generic (PLEG): container finished" podID="419d0cb8-1278-4521-a0b6-207795fdd75e" containerID="4d77bf6a9a71e08a591449c56506a2e3d8be7ac379462f9cffc53d05dd0ca081" exitCode=0 Oct 04 05:05:54 crc kubenswrapper[4802]: I1004 05:05:54.372168 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cf51-account-create-hpqs2" event={"ID":"419d0cb8-1278-4521-a0b6-207795fdd75e","Type":"ContainerDied","Data":"4d77bf6a9a71e08a591449c56506a2e3d8be7ac379462f9cffc53d05dd0ca081"} Oct 04 05:05:54 crc kubenswrapper[4802]: I1004 05:05:54.402898 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-cf51-account-create-hpqs2" podStartSLOduration=4.402875615 podStartE2EDuration="4.402875615s" podCreationTimestamp="2025-10-04 05:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:05:53.375290526 +0000 UTC m=+1195.783291161" 
watchObservedRunningTime="2025-10-04 05:05:54.402875615 +0000 UTC m=+1196.810876260" Oct 04 05:05:55 crc kubenswrapper[4802]: I1004 05:05:55.775328 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cf51-account-create-hpqs2" Oct 04 05:05:55 crc kubenswrapper[4802]: I1004 05:05:55.780825 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7995-account-create-ccmzc" Oct 04 05:05:55 crc kubenswrapper[4802]: I1004 05:05:55.905544 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwmxl\" (UniqueName: \"kubernetes.io/projected/80f65253-f4a0-477e-a6f1-9773fde497b5-kube-api-access-mwmxl\") pod \"80f65253-f4a0-477e-a6f1-9773fde497b5\" (UID: \"80f65253-f4a0-477e-a6f1-9773fde497b5\") " Oct 04 05:05:55 crc kubenswrapper[4802]: I1004 05:05:55.905704 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bxhk\" (UniqueName: \"kubernetes.io/projected/419d0cb8-1278-4521-a0b6-207795fdd75e-kube-api-access-6bxhk\") pod \"419d0cb8-1278-4521-a0b6-207795fdd75e\" (UID: \"419d0cb8-1278-4521-a0b6-207795fdd75e\") " Oct 04 05:05:55 crc kubenswrapper[4802]: I1004 05:05:55.911840 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419d0cb8-1278-4521-a0b6-207795fdd75e-kube-api-access-6bxhk" (OuterVolumeSpecName: "kube-api-access-6bxhk") pod "419d0cb8-1278-4521-a0b6-207795fdd75e" (UID: "419d0cb8-1278-4521-a0b6-207795fdd75e"). InnerVolumeSpecName "kube-api-access-6bxhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:55 crc kubenswrapper[4802]: I1004 05:05:55.917812 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f65253-f4a0-477e-a6f1-9773fde497b5-kube-api-access-mwmxl" (OuterVolumeSpecName: "kube-api-access-mwmxl") pod "80f65253-f4a0-477e-a6f1-9773fde497b5" (UID: "80f65253-f4a0-477e-a6f1-9773fde497b5"). InnerVolumeSpecName "kube-api-access-mwmxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:05:56 crc kubenswrapper[4802]: I1004 05:05:56.008000 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bxhk\" (UniqueName: \"kubernetes.io/projected/419d0cb8-1278-4521-a0b6-207795fdd75e-kube-api-access-6bxhk\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:56 crc kubenswrapper[4802]: I1004 05:05:56.008045 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwmxl\" (UniqueName: \"kubernetes.io/projected/80f65253-f4a0-477e-a6f1-9773fde497b5-kube-api-access-mwmxl\") on node \"crc\" DevicePath \"\"" Oct 04 05:05:56 crc kubenswrapper[4802]: I1004 05:05:56.411286 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7995-account-create-ccmzc" Oct 04 05:05:56 crc kubenswrapper[4802]: I1004 05:05:56.411371 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7995-account-create-ccmzc" event={"ID":"80f65253-f4a0-477e-a6f1-9773fde497b5","Type":"ContainerDied","Data":"989cb9f84b332741e6ad6b8d269afd346c51a0142c619baceb826f208c473f28"} Oct 04 05:05:56 crc kubenswrapper[4802]: I1004 05:05:56.411819 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="989cb9f84b332741e6ad6b8d269afd346c51a0142c619baceb826f208c473f28" Oct 04 05:05:56 crc kubenswrapper[4802]: I1004 05:05:56.416466 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cf51-account-create-hpqs2" event={"ID":"419d0cb8-1278-4521-a0b6-207795fdd75e","Type":"ContainerDied","Data":"ca5973b88c959f4f25197ebecc914725e8102ab64676b5026582bd6f067b8a7a"} Oct 04 05:05:56 crc kubenswrapper[4802]: I1004 05:05:56.416508 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca5973b88c959f4f25197ebecc914725e8102ab64676b5026582bd6f067b8a7a" Oct 04 05:05:56 crc kubenswrapper[4802]: I1004 05:05:56.416597 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cf51-account-create-hpqs2" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.385971 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.672139 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vqjw4"] Oct 04 05:05:57 crc kubenswrapper[4802]: E1004 05:05:57.672741 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f65253-f4a0-477e-a6f1-9773fde497b5" containerName="mariadb-account-create" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.672760 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f65253-f4a0-477e-a6f1-9773fde497b5" containerName="mariadb-account-create" Oct 04 05:05:57 crc kubenswrapper[4802]: E1004 05:05:57.672774 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419d0cb8-1278-4521-a0b6-207795fdd75e" containerName="mariadb-account-create" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.672780 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="419d0cb8-1278-4521-a0b6-207795fdd75e" containerName="mariadb-account-create" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.672920 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f65253-f4a0-477e-a6f1-9773fde497b5" containerName="mariadb-account-create" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.672943 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="419d0cb8-1278-4521-a0b6-207795fdd75e" containerName="mariadb-account-create" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.673417 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vqjw4" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.685750 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vqjw4"] Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.779909 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-stbxp"] Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.781140 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-stbxp" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.791886 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-stbxp"] Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.835384 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5v5w\" (UniqueName: \"kubernetes.io/projected/ad98d92d-f119-4120-a3ca-e309fa442279-kube-api-access-b5v5w\") pod \"cinder-db-create-vqjw4\" (UID: \"ad98d92d-f119-4120-a3ca-e309fa442279\") " pod="openstack/cinder-db-create-vqjw4" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.853991 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.937109 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5v5w\" (UniqueName: \"kubernetes.io/projected/ad98d92d-f119-4120-a3ca-e309fa442279-kube-api-access-b5v5w\") pod \"cinder-db-create-vqjw4\" (UID: \"ad98d92d-f119-4120-a3ca-e309fa442279\") " pod="openstack/cinder-db-create-vqjw4" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.937182 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6wsh\" (UniqueName: \"kubernetes.io/projected/edce7589-9918-41f7-9d90-c5463388138f-kube-api-access-g6wsh\") pod 
\"barbican-db-create-stbxp\" (UID: \"edce7589-9918-41f7-9d90-c5463388138f\") " pod="openstack/barbican-db-create-stbxp" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.973976 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5v5w\" (UniqueName: \"kubernetes.io/projected/ad98d92d-f119-4120-a3ca-e309fa442279-kube-api-access-b5v5w\") pod \"cinder-db-create-vqjw4\" (UID: \"ad98d92d-f119-4120-a3ca-e309fa442279\") " pod="openstack/cinder-db-create-vqjw4" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.984057 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qb9xs"] Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.985578 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qb9xs" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.995206 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vqjw4" Oct 04 05:05:57 crc kubenswrapper[4802]: I1004 05:05:57.997735 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qb9xs"] Oct 04 05:05:58 crc kubenswrapper[4802]: I1004 05:05:58.040451 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6wsh\" (UniqueName: \"kubernetes.io/projected/edce7589-9918-41f7-9d90-c5463388138f-kube-api-access-g6wsh\") pod \"barbican-db-create-stbxp\" (UID: \"edce7589-9918-41f7-9d90-c5463388138f\") " pod="openstack/barbican-db-create-stbxp" Oct 04 05:05:58 crc kubenswrapper[4802]: I1004 05:05:58.072472 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6wsh\" (UniqueName: \"kubernetes.io/projected/edce7589-9918-41f7-9d90-c5463388138f-kube-api-access-g6wsh\") pod \"barbican-db-create-stbxp\" (UID: \"edce7589-9918-41f7-9d90-c5463388138f\") " pod="openstack/barbican-db-create-stbxp" Oct 04 05:05:58 crc kubenswrapper[4802]: 
I1004 05:05:58.102318 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-stbxp" Oct 04 05:05:58 crc kubenswrapper[4802]: I1004 05:05:58.142206 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f4wb\" (UniqueName: \"kubernetes.io/projected/806dfe15-9b99-4889-9bfe-4202e609e41a-kube-api-access-6f4wb\") pod \"neutron-db-create-qb9xs\" (UID: \"806dfe15-9b99-4889-9bfe-4202e609e41a\") " pod="openstack/neutron-db-create-qb9xs" Oct 04 05:05:58 crc kubenswrapper[4802]: I1004 05:05:58.243606 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f4wb\" (UniqueName: \"kubernetes.io/projected/806dfe15-9b99-4889-9bfe-4202e609e41a-kube-api-access-6f4wb\") pod \"neutron-db-create-qb9xs\" (UID: \"806dfe15-9b99-4889-9bfe-4202e609e41a\") " pod="openstack/neutron-db-create-qb9xs" Oct 04 05:05:58 crc kubenswrapper[4802]: I1004 05:05:58.263884 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f4wb\" (UniqueName: \"kubernetes.io/projected/806dfe15-9b99-4889-9bfe-4202e609e41a-kube-api-access-6f4wb\") pod \"neutron-db-create-qb9xs\" (UID: \"806dfe15-9b99-4889-9bfe-4202e609e41a\") " pod="openstack/neutron-db-create-qb9xs" Oct 04 05:05:58 crc kubenswrapper[4802]: I1004 05:05:58.337347 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qb9xs" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.036133 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-l46qt"] Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.037428 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l46qt"] Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.037519 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.040419 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.042142 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.042217 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fxp4p" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.043636 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.189877 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-config-data\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.189939 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-combined-ca-bundle\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.190060 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkk8v\" (UniqueName: \"kubernetes.io/projected/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-kube-api-access-vkk8v\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.291601 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-config-data\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.291672 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-combined-ca-bundle\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.291783 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkk8v\" (UniqueName: \"kubernetes.io/projected/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-kube-api-access-vkk8v\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.297831 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-config-data\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.298365 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-combined-ca-bundle\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.308973 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkk8v\" (UniqueName: 
\"kubernetes.io/projected/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-kube-api-access-vkk8v\") pod \"keystone-db-sync-l46qt\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:01 crc kubenswrapper[4802]: I1004 05:06:01.391304 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:04 crc kubenswrapper[4802]: I1004 05:06:04.839994 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-stbxp"] Oct 04 05:06:04 crc kubenswrapper[4802]: W1004 05:06:04.843474 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad98d92d_f119_4120_a3ca_e309fa442279.slice/crio-c9c488a9941f3f5642b7917b85cac572f5b96c89d7dfafead4b03984fc629208 WatchSource:0}: Error finding container c9c488a9941f3f5642b7917b85cac572f5b96c89d7dfafead4b03984fc629208: Status 404 returned error can't find the container with id c9c488a9941f3f5642b7917b85cac572f5b96c89d7dfafead4b03984fc629208 Oct 04 05:06:04 crc kubenswrapper[4802]: W1004 05:06:04.844762 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedce7589_9918_41f7_9d90_c5463388138f.slice/crio-ed4af77c5385be22ccb0c4980c2715ebc6b0c26c325d63ef5d953a82ef1395e9 WatchSource:0}: Error finding container ed4af77c5385be22ccb0c4980c2715ebc6b0c26c325d63ef5d953a82ef1395e9: Status 404 returned error can't find the container with id ed4af77c5385be22ccb0c4980c2715ebc6b0c26c325d63ef5d953a82ef1395e9 Oct 04 05:06:04 crc kubenswrapper[4802]: I1004 05:06:04.847182 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vqjw4"] Oct 04 05:06:04 crc kubenswrapper[4802]: I1004 05:06:04.984074 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l46qt"] Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 
05:06:05.001600 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qb9xs"] Oct 04 05:06:05 crc kubenswrapper[4802]: W1004 05:06:05.007411 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf144c400_3cd4_4933_a2c6_ab57e96bb6d9.slice/crio-9f72574feabdfd3547f392f51b9f0cb5185a41bf6a09582e7ad2ec6f64e9f1d5 WatchSource:0}: Error finding container 9f72574feabdfd3547f392f51b9f0cb5185a41bf6a09582e7ad2ec6f64e9f1d5: Status 404 returned error can't find the container with id 9f72574feabdfd3547f392f51b9f0cb5185a41bf6a09582e7ad2ec6f64e9f1d5 Oct 04 05:06:05 crc kubenswrapper[4802]: W1004 05:06:05.030474 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod806dfe15_9b99_4889_9bfe_4202e609e41a.slice/crio-76913e8b3ce6add1ae976fbe1bf99a617b85a1cc04c4cca2023803c1c4dc77a5 WatchSource:0}: Error finding container 76913e8b3ce6add1ae976fbe1bf99a617b85a1cc04c4cca2023803c1c4dc77a5: Status 404 returned error can't find the container with id 76913e8b3ce6add1ae976fbe1bf99a617b85a1cc04c4cca2023803c1c4dc77a5 Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.515527 4802 generic.go:334] "Generic (PLEG): container finished" podID="806dfe15-9b99-4889-9bfe-4202e609e41a" containerID="ff284b4fa3949ed045dd522e2605da57eefd71949992099d58be1aae7ac0273c" exitCode=0 Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.515895 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qb9xs" event={"ID":"806dfe15-9b99-4889-9bfe-4202e609e41a","Type":"ContainerDied","Data":"ff284b4fa3949ed045dd522e2605da57eefd71949992099d58be1aae7ac0273c"} Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.515921 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qb9xs" 
event={"ID":"806dfe15-9b99-4889-9bfe-4202e609e41a","Type":"ContainerStarted","Data":"76913e8b3ce6add1ae976fbe1bf99a617b85a1cc04c4cca2023803c1c4dc77a5"} Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.517849 4802 generic.go:334] "Generic (PLEG): container finished" podID="ad98d92d-f119-4120-a3ca-e309fa442279" containerID="bf4169ee1e91fde9d1c7ee7b42a6a3db1082e54d764d9667358530e3a965c51f" exitCode=0 Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.518012 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vqjw4" event={"ID":"ad98d92d-f119-4120-a3ca-e309fa442279","Type":"ContainerDied","Data":"bf4169ee1e91fde9d1c7ee7b42a6a3db1082e54d764d9667358530e3a965c51f"} Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.518066 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vqjw4" event={"ID":"ad98d92d-f119-4120-a3ca-e309fa442279","Type":"ContainerStarted","Data":"c9c488a9941f3f5642b7917b85cac572f5b96c89d7dfafead4b03984fc629208"} Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.522436 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rn59p" event={"ID":"ed629005-3761-4623-99e5-723e05932230","Type":"ContainerStarted","Data":"9686717e6b5c3a23b52989cd4006df86798432aaf61867e64f4aa3c0689052db"} Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.527219 4802 generic.go:334] "Generic (PLEG): container finished" podID="edce7589-9918-41f7-9d90-c5463388138f" containerID="f36c35a3acade327eb26707957db318e22130b8614b8b60df3cb392eed9d7fd4" exitCode=0 Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.527286 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-stbxp" event={"ID":"edce7589-9918-41f7-9d90-c5463388138f","Type":"ContainerDied","Data":"f36c35a3acade327eb26707957db318e22130b8614b8b60df3cb392eed9d7fd4"} Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.527321 4802 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-db-create-stbxp" event={"ID":"edce7589-9918-41f7-9d90-c5463388138f","Type":"ContainerStarted","Data":"ed4af77c5385be22ccb0c4980c2715ebc6b0c26c325d63ef5d953a82ef1395e9"} Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.531743 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l46qt" event={"ID":"f144c400-3cd4-4933-a2c6-ab57e96bb6d9","Type":"ContainerStarted","Data":"9f72574feabdfd3547f392f51b9f0cb5185a41bf6a09582e7ad2ec6f64e9f1d5"} Oct 04 05:06:05 crc kubenswrapper[4802]: I1004 05:06:05.564378 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rn59p" podStartSLOduration=2.440700793 podStartE2EDuration="14.564359579s" podCreationTimestamp="2025-10-04 05:05:51 +0000 UTC" firstStartedPulling="2025-10-04 05:05:52.303959443 +0000 UTC m=+1194.711960068" lastFinishedPulling="2025-10-04 05:06:04.427618229 +0000 UTC m=+1206.835618854" observedRunningTime="2025-10-04 05:06:05.557835394 +0000 UTC m=+1207.965836019" watchObservedRunningTime="2025-10-04 05:06:05.564359579 +0000 UTC m=+1207.972360204" Oct 04 05:06:08 crc kubenswrapper[4802]: I1004 05:06:08.904674 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vqjw4" Oct 04 05:06:08 crc kubenswrapper[4802]: I1004 05:06:08.980395 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qb9xs" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.042188 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-stbxp" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.042192 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5v5w\" (UniqueName: \"kubernetes.io/projected/ad98d92d-f119-4120-a3ca-e309fa442279-kube-api-access-b5v5w\") pod \"ad98d92d-f119-4120-a3ca-e309fa442279\" (UID: \"ad98d92d-f119-4120-a3ca-e309fa442279\") " Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.061526 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad98d92d-f119-4120-a3ca-e309fa442279-kube-api-access-b5v5w" (OuterVolumeSpecName: "kube-api-access-b5v5w") pod "ad98d92d-f119-4120-a3ca-e309fa442279" (UID: "ad98d92d-f119-4120-a3ca-e309fa442279"). InnerVolumeSpecName "kube-api-access-b5v5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.146963 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6wsh\" (UniqueName: \"kubernetes.io/projected/edce7589-9918-41f7-9d90-c5463388138f-kube-api-access-g6wsh\") pod \"edce7589-9918-41f7-9d90-c5463388138f\" (UID: \"edce7589-9918-41f7-9d90-c5463388138f\") " Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.147230 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f4wb\" (UniqueName: \"kubernetes.io/projected/806dfe15-9b99-4889-9bfe-4202e609e41a-kube-api-access-6f4wb\") pod \"806dfe15-9b99-4889-9bfe-4202e609e41a\" (UID: \"806dfe15-9b99-4889-9bfe-4202e609e41a\") " Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.147964 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5v5w\" (UniqueName: \"kubernetes.io/projected/ad98d92d-f119-4120-a3ca-e309fa442279-kube-api-access-b5v5w\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.152048 4802 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806dfe15-9b99-4889-9bfe-4202e609e41a-kube-api-access-6f4wb" (OuterVolumeSpecName: "kube-api-access-6f4wb") pod "806dfe15-9b99-4889-9bfe-4202e609e41a" (UID: "806dfe15-9b99-4889-9bfe-4202e609e41a"). InnerVolumeSpecName "kube-api-access-6f4wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.152782 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edce7589-9918-41f7-9d90-c5463388138f-kube-api-access-g6wsh" (OuterVolumeSpecName: "kube-api-access-g6wsh") pod "edce7589-9918-41f7-9d90-c5463388138f" (UID: "edce7589-9918-41f7-9d90-c5463388138f"). InnerVolumeSpecName "kube-api-access-g6wsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.249779 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f4wb\" (UniqueName: \"kubernetes.io/projected/806dfe15-9b99-4889-9bfe-4202e609e41a-kube-api-access-6f4wb\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.249823 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6wsh\" (UniqueName: \"kubernetes.io/projected/edce7589-9918-41f7-9d90-c5463388138f-kube-api-access-g6wsh\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.562424 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-stbxp" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.562933 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-stbxp" event={"ID":"edce7589-9918-41f7-9d90-c5463388138f","Type":"ContainerDied","Data":"ed4af77c5385be22ccb0c4980c2715ebc6b0c26c325d63ef5d953a82ef1395e9"} Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.562963 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4af77c5385be22ccb0c4980c2715ebc6b0c26c325d63ef5d953a82ef1395e9" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.566322 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l46qt" event={"ID":"f144c400-3cd4-4933-a2c6-ab57e96bb6d9","Type":"ContainerStarted","Data":"6c2f299e3422f43f9175be4cff4c4839b14702487e78bf087eace274c9e42cd4"} Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.568540 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qb9xs" event={"ID":"806dfe15-9b99-4889-9bfe-4202e609e41a","Type":"ContainerDied","Data":"76913e8b3ce6add1ae976fbe1bf99a617b85a1cc04c4cca2023803c1c4dc77a5"} Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.568770 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76913e8b3ce6add1ae976fbe1bf99a617b85a1cc04c4cca2023803c1c4dc77a5" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.568553 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qb9xs" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.570737 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vqjw4" event={"ID":"ad98d92d-f119-4120-a3ca-e309fa442279","Type":"ContainerDied","Data":"c9c488a9941f3f5642b7917b85cac572f5b96c89d7dfafead4b03984fc629208"} Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.570767 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c488a9941f3f5642b7917b85cac572f5b96c89d7dfafead4b03984fc629208" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.570809 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vqjw4" Oct 04 05:06:09 crc kubenswrapper[4802]: I1004 05:06:09.592576 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-l46qt" podStartSLOduration=4.7657763410000005 podStartE2EDuration="8.592550144s" podCreationTimestamp="2025-10-04 05:06:01 +0000 UTC" firstStartedPulling="2025-10-04 05:06:05.009507628 +0000 UTC m=+1207.417508253" lastFinishedPulling="2025-10-04 05:06:08.836281401 +0000 UTC m=+1211.244282056" observedRunningTime="2025-10-04 05:06:09.583201358 +0000 UTC m=+1211.991202003" watchObservedRunningTime="2025-10-04 05:06:09.592550144 +0000 UTC m=+1212.000550779" Oct 04 05:06:12 crc kubenswrapper[4802]: I1004 05:06:12.600666 4802 generic.go:334] "Generic (PLEG): container finished" podID="f144c400-3cd4-4933-a2c6-ab57e96bb6d9" containerID="6c2f299e3422f43f9175be4cff4c4839b14702487e78bf087eace274c9e42cd4" exitCode=0 Oct 04 05:06:12 crc kubenswrapper[4802]: I1004 05:06:12.600770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l46qt" event={"ID":"f144c400-3cd4-4933-a2c6-ab57e96bb6d9","Type":"ContainerDied","Data":"6c2f299e3422f43f9175be4cff4c4839b14702487e78bf087eace274c9e42cd4"} Oct 04 05:06:12 crc 
kubenswrapper[4802]: I1004 05:06:12.603408 4802 generic.go:334] "Generic (PLEG): container finished" podID="ed629005-3761-4623-99e5-723e05932230" containerID="9686717e6b5c3a23b52989cd4006df86798432aaf61867e64f4aa3c0689052db" exitCode=0 Oct 04 05:06:12 crc kubenswrapper[4802]: I1004 05:06:12.603471 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rn59p" event={"ID":"ed629005-3761-4623-99e5-723e05932230","Type":"ContainerDied","Data":"9686717e6b5c3a23b52989cd4006df86798432aaf61867e64f4aa3c0689052db"} Oct 04 05:06:13 crc kubenswrapper[4802]: I1004 05:06:13.908593 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:13 crc kubenswrapper[4802]: I1004 05:06:13.993002 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rn59p" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.028821 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-config-data\") pod \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.029039 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-combined-ca-bundle\") pod \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.029083 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkk8v\" (UniqueName: \"kubernetes.io/projected/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-kube-api-access-vkk8v\") pod \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\" (UID: \"f144c400-3cd4-4933-a2c6-ab57e96bb6d9\") " Oct 04 05:06:14 crc 
kubenswrapper[4802]: I1004 05:06:14.034801 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-kube-api-access-vkk8v" (OuterVolumeSpecName: "kube-api-access-vkk8v") pod "f144c400-3cd4-4933-a2c6-ab57e96bb6d9" (UID: "f144c400-3cd4-4933-a2c6-ab57e96bb6d9"). InnerVolumeSpecName "kube-api-access-vkk8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.055563 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f144c400-3cd4-4933-a2c6-ab57e96bb6d9" (UID: "f144c400-3cd4-4933-a2c6-ab57e96bb6d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.074837 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-config-data" (OuterVolumeSpecName: "config-data") pod "f144c400-3cd4-4933-a2c6-ab57e96bb6d9" (UID: "f144c400-3cd4-4933-a2c6-ab57e96bb6d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.130240 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7r7n\" (UniqueName: \"kubernetes.io/projected/ed629005-3761-4623-99e5-723e05932230-kube-api-access-x7r7n\") pod \"ed629005-3761-4623-99e5-723e05932230\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.130410 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-config-data\") pod \"ed629005-3761-4623-99e5-723e05932230\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.130436 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-db-sync-config-data\") pod \"ed629005-3761-4623-99e5-723e05932230\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.130494 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-combined-ca-bundle\") pod \"ed629005-3761-4623-99e5-723e05932230\" (UID: \"ed629005-3761-4623-99e5-723e05932230\") " Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.130791 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.130807 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.130820 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkk8v\" (UniqueName: \"kubernetes.io/projected/f144c400-3cd4-4933-a2c6-ab57e96bb6d9-kube-api-access-vkk8v\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.133289 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ed629005-3761-4623-99e5-723e05932230" (UID: "ed629005-3761-4623-99e5-723e05932230"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.134573 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed629005-3761-4623-99e5-723e05932230-kube-api-access-x7r7n" (OuterVolumeSpecName: "kube-api-access-x7r7n") pod "ed629005-3761-4623-99e5-723e05932230" (UID: "ed629005-3761-4623-99e5-723e05932230"). InnerVolumeSpecName "kube-api-access-x7r7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.152710 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed629005-3761-4623-99e5-723e05932230" (UID: "ed629005-3761-4623-99e5-723e05932230"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.175927 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-config-data" (OuterVolumeSpecName: "config-data") pod "ed629005-3761-4623-99e5-723e05932230" (UID: "ed629005-3761-4623-99e5-723e05932230"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.232163 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.232199 4802 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.232210 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed629005-3761-4623-99e5-723e05932230-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.232220 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7r7n\" (UniqueName: \"kubernetes.io/projected/ed629005-3761-4623-99e5-723e05932230-kube-api-access-x7r7n\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.620351 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rn59p" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.620367 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rn59p" event={"ID":"ed629005-3761-4623-99e5-723e05932230","Type":"ContainerDied","Data":"17262f724c7c0274512ff795e67163afe5c61b9990566e44c05099fd3cc3ab88"} Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.621184 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17262f724c7c0274512ff795e67163afe5c61b9990566e44c05099fd3cc3ab88" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.622581 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l46qt" event={"ID":"f144c400-3cd4-4933-a2c6-ab57e96bb6d9","Type":"ContainerDied","Data":"9f72574feabdfd3547f392f51b9f0cb5185a41bf6a09582e7ad2ec6f64e9f1d5"} Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.622627 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f72574feabdfd3547f392f51b9f0cb5185a41bf6a09582e7ad2ec6f64e9f1d5" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.622632 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-l46qt" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.938314 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cm69w"] Oct 04 05:06:14 crc kubenswrapper[4802]: E1004 05:06:14.947804 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f144c400-3cd4-4933-a2c6-ab57e96bb6d9" containerName="keystone-db-sync" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.947849 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f144c400-3cd4-4933-a2c6-ab57e96bb6d9" containerName="keystone-db-sync" Oct 04 05:06:14 crc kubenswrapper[4802]: E1004 05:06:14.947887 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed629005-3761-4623-99e5-723e05932230" containerName="glance-db-sync" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.947895 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed629005-3761-4623-99e5-723e05932230" containerName="glance-db-sync" Oct 04 05:06:14 crc kubenswrapper[4802]: E1004 05:06:14.947915 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806dfe15-9b99-4889-9bfe-4202e609e41a" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.947923 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="806dfe15-9b99-4889-9bfe-4202e609e41a" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: E1004 05:06:14.947940 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edce7589-9918-41f7-9d90-c5463388138f" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.947948 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="edce7589-9918-41f7-9d90-c5463388138f" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: E1004 05:06:14.947960 4802 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ad98d92d-f119-4120-a3ca-e309fa442279" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.947967 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad98d92d-f119-4120-a3ca-e309fa442279" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.948268 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="edce7589-9918-41f7-9d90-c5463388138f" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.948284 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed629005-3761-4623-99e5-723e05932230" containerName="glance-db-sync" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.948300 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="806dfe15-9b99-4889-9bfe-4202e609e41a" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.948310 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad98d92d-f119-4120-a3ca-e309fa442279" containerName="mariadb-database-create" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.948323 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f144c400-3cd4-4933-a2c6-ab57e96bb6d9" containerName="keystone-db-sync" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.949140 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.956383 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cm69w"] Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.957231 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.957290 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.957435 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.957625 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fxp4p" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.982563 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-vm254"] Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.984043 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:14 crc kubenswrapper[4802]: I1004 05:06:14.997003 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-vm254"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.058488 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-fernet-keys\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.058590 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-scripts\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.058669 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-credential-keys\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.058711 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-combined-ca-bundle\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.058762 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5k7wz\" (UniqueName: \"kubernetes.io/projected/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-kube-api-access-5k7wz\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.058902 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-config-data\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.153447 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.156367 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.160845 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.160907 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.162357 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-fernet-keys\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.170723 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-scripts\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " 
pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171081 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-log-httpd\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171190 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-run-httpd\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171259 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-scripts\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171301 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171331 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-credential-keys\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171385 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171432 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171459 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-combined-ca-bundle\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171869 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-config-data\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171921 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-config\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.171975 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qbtsw\" (UniqueName: \"kubernetes.io/projected/f92d045a-efa3-4087-a8f6-940f4446c663-kube-api-access-qbtsw\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.172047 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k7wz\" (UniqueName: \"kubernetes.io/projected/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-kube-api-access-5k7wz\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.172081 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.172119 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hlk\" (UniqueName: \"kubernetes.io/projected/226a8579-6d84-456c-961f-087441faa92f-kube-api-access-k6hlk\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.172166 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-config-data\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.172198 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.179763 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-config-data\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.180718 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-credential-keys\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.187275 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-fernet-keys\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.187483 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-scripts\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.187887 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-combined-ca-bundle\") pod \"keystone-bootstrap-cm69w\" (UID: 
\"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.209600 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.222356 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k7wz\" (UniqueName: \"kubernetes.io/projected/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-kube-api-access-5k7wz\") pod \"keystone-bootstrap-cm69w\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.248749 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-vm254"] Oct 04 05:06:15 crc kubenswrapper[4802]: E1004 05:06:15.249257 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-k6hlk ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-75bb4695fc-vm254" podUID="226a8579-6d84-456c-961f-087441faa92f" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.260695 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nfznr"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.272410 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286124 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-run-httpd\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286164 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-scripts\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286203 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286256 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286296 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286318 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-config-data\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286344 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-config\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286385 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbtsw\" (UniqueName: \"kubernetes.io/projected/f92d045a-efa3-4087-a8f6-940f4446c663-kube-api-access-qbtsw\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286421 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286446 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hlk\" (UniqueName: \"kubernetes.io/projected/226a8579-6d84-456c-961f-087441faa92f-kube-api-access-k6hlk\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286492 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-sb\") 
pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.286620 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-log-httpd\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.287036 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-log-httpd\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.287240 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-run-httpd\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.297603 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-scripts\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.298197 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.298344 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-config\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.298701 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.302241 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.305312 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.305826 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.307495 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.330324 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-config-data\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.344334 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbtsw\" (UniqueName: \"kubernetes.io/projected/f92d045a-efa3-4087-a8f6-940f4446c663-kube-api-access-qbtsw\") pod \"ceilometer-0\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.356058 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nfznr"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.360426 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6hlk\" (UniqueName: \"kubernetes.io/projected/226a8579-6d84-456c-961f-087441faa92f-kube-api-access-k6hlk\") pod \"dnsmasq-dns-75bb4695fc-vm254\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.363690 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jk5tx"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.374100 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.376433 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.376677 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.376804 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-55hqk" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.388925 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389020 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-combined-ca-bundle\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389055 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389104 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f5a68aa1-61b6-4151-b77b-8b107570d0e6-logs\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389137 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcspf\" (UniqueName: \"kubernetes.io/projected/f5a68aa1-61b6-4151-b77b-8b107570d0e6-kube-api-access-dcspf\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389157 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389176 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-scripts\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389223 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-config-data\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389247 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-config\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.389293 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694kg\" (UniqueName: \"kubernetes.io/projected/dd6623ce-dbff-4c97-a029-dee0ffe606f0-kube-api-access-694kg\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.401351 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jk5tx"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.430426 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nfznr"] Oct 04 05:06:15 crc kubenswrapper[4802]: E1004 05:06:15.431164 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-694kg ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6546db6db7-nfznr" podUID="dd6623ce-dbff-4c97-a029-dee0ffe606f0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.439000 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-n75ls"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.440399 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.474344 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-n75ls"] Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493597 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcspf\" (UniqueName: \"kubernetes.io/projected/f5a68aa1-61b6-4151-b77b-8b107570d0e6-kube-api-access-dcspf\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493654 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493670 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-scripts\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493703 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-config-data\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493727 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrx9w\" (UniqueName: 
\"kubernetes.io/projected/85b6b87c-0d6a-4bd3-af19-94f09748a665-kube-api-access-lrx9w\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493748 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-config\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493779 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694kg\" (UniqueName: \"kubernetes.io/projected/dd6623ce-dbff-4c97-a029-dee0ffe606f0-kube-api-access-694kg\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493820 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493879 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-combined-ca-bundle\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493910 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493944 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.493973 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.494001 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.494028 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-config\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.494051 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f5a68aa1-61b6-4151-b77b-8b107570d0e6-logs\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.495365 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.495953 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5a68aa1-61b6-4151-b77b-8b107570d0e6-logs\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.496834 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.503181 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-scripts\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.503198 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-combined-ca-bundle\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " 
pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.503929 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.504630 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-config\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.507246 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-config-data\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.516771 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcspf\" (UniqueName: \"kubernetes.io/projected/f5a68aa1-61b6-4151-b77b-8b107570d0e6-kube-api-access-dcspf\") pod \"placement-db-sync-jk5tx\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.524821 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694kg\" (UniqueName: \"kubernetes.io/projected/dd6623ce-dbff-4c97-a029-dee0ffe606f0-kube-api-access-694kg\") pod \"dnsmasq-dns-6546db6db7-nfznr\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.555509 4802 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.601563 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.601626 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.601685 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.601718 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-config\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.601826 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrx9w\" (UniqueName: \"kubernetes.io/projected/85b6b87c-0d6a-4bd3-af19-94f09748a665-kube-api-access-lrx9w\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " 
pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.603771 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.604037 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.604362 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-config\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.611409 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.623280 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrx9w\" (UniqueName: \"kubernetes.io/projected/85b6b87c-0d6a-4bd3-af19-94f09748a665-kube-api-access-lrx9w\") pod \"dnsmasq-dns-7987f74bbc-n75ls\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.633274 
4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.633353 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.643559 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.649783 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.701232 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.702899 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-694kg\" (UniqueName: \"kubernetes.io/projected/dd6623ce-dbff-4c97-a029-dee0ffe606f0-kube-api-access-694kg\") pod \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703033 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-nb\") pod \"226a8579-6d84-456c-961f-087441faa92f\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703092 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-config\") pod \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703147 4802 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-sb\") pod \"226a8579-6d84-456c-961f-087441faa92f\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703180 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-sb\") pod \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703223 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-dns-svc\") pod \"226a8579-6d84-456c-961f-087441faa92f\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703245 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-dns-svc\") pod \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703898 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd6623ce-dbff-4c97-a029-dee0ffe606f0" (UID: "dd6623ce-dbff-4c97-a029-dee0ffe606f0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703880 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "226a8579-6d84-456c-961f-087441faa92f" (UID: "226a8579-6d84-456c-961f-087441faa92f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703919 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "226a8579-6d84-456c-961f-087441faa92f" (UID: "226a8579-6d84-456c-961f-087441faa92f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703906 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "226a8579-6d84-456c-961f-087441faa92f" (UID: "226a8579-6d84-456c-961f-087441faa92f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.703909 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-config" (OuterVolumeSpecName: "config") pod "dd6623ce-dbff-4c97-a029-dee0ffe606f0" (UID: "dd6623ce-dbff-4c97-a029-dee0ffe606f0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.705051 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd6623ce-dbff-4c97-a029-dee0ffe606f0" (UID: "dd6623ce-dbff-4c97-a029-dee0ffe606f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.705118 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6hlk\" (UniqueName: \"kubernetes.io/projected/226a8579-6d84-456c-961f-087441faa92f-kube-api-access-k6hlk\") pod \"226a8579-6d84-456c-961f-087441faa92f\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.705444 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-nb\") pod \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\" (UID: \"dd6623ce-dbff-4c97-a029-dee0ffe606f0\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.705614 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd6623ce-dbff-4c97-a029-dee0ffe606f0" (UID: "dd6623ce-dbff-4c97-a029-dee0ffe606f0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.705637 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-config\") pod \"226a8579-6d84-456c-961f-087441faa92f\" (UID: \"226a8579-6d84-456c-961f-087441faa92f\") " Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.705986 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-config" (OuterVolumeSpecName: "config") pod "226a8579-6d84-456c-961f-087441faa92f" (UID: "226a8579-6d84-456c-961f-087441faa92f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.706038 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.706050 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.706059 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.706067 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.706075 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.706083 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6623ce-dbff-4c97-a029-dee0ffe606f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.706091 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.708825 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226a8579-6d84-456c-961f-087441faa92f-kube-api-access-k6hlk" (OuterVolumeSpecName: "kube-api-access-k6hlk") pod "226a8579-6d84-456c-961f-087441faa92f" (UID: "226a8579-6d84-456c-961f-087441faa92f"). InnerVolumeSpecName "kube-api-access-k6hlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.708864 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6623ce-dbff-4c97-a029-dee0ffe606f0-kube-api-access-694kg" (OuterVolumeSpecName: "kube-api-access-694kg") pod "dd6623ce-dbff-4c97-a029-dee0ffe606f0" (UID: "dd6623ce-dbff-4c97-a029-dee0ffe606f0"). InnerVolumeSpecName "kube-api-access-694kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.774029 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.808040 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6hlk\" (UniqueName: \"kubernetes.io/projected/226a8579-6d84-456c-961f-087441faa92f-kube-api-access-k6hlk\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.808064 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226a8579-6d84-456c-961f-087441faa92f-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.808073 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-694kg\" (UniqueName: \"kubernetes.io/projected/dd6623ce-dbff-4c97-a029-dee0ffe606f0-kube-api-access-694kg\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.895001 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cm69w"] Oct 04 05:06:15 crc kubenswrapper[4802]: W1004 05:06:15.904907 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfbd6a6d_e6ad_4110_9dc7_f10efc07f07c.slice/crio-a13e049e8e3cb3cd6ae3316e0c37d8fce7f583df56e2fcda926e56bc8afb45cf WatchSource:0}: Error finding container a13e049e8e3cb3cd6ae3316e0c37d8fce7f583df56e2fcda926e56bc8afb45cf: Status 404 returned error can't find the container with id a13e049e8e3cb3cd6ae3316e0c37d8fce7f583df56e2fcda926e56bc8afb45cf Oct 04 05:06:15 crc kubenswrapper[4802]: I1004 05:06:15.972275 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jk5tx"] Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.025368 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:16 crc kubenswrapper[4802]: W1004 05:06:16.034343 4802 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92d045a_efa3_4087_a8f6_940f4446c663.slice/crio-fd234756a97ec6198eba765dfdccd263238d62372970fc1aa4d2526a6540d2c8 WatchSource:0}: Error finding container fd234756a97ec6198eba765dfdccd263238d62372970fc1aa4d2526a6540d2c8: Status 404 returned error can't find the container with id fd234756a97ec6198eba765dfdccd263238d62372970fc1aa4d2526a6540d2c8 Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.268770 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-n75ls"] Oct 04 05:06:16 crc kubenswrapper[4802]: W1004 05:06:16.270158 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85b6b87c_0d6a_4bd3_af19_94f09748a665.slice/crio-66b7e17ad91e5199f04d5db18b8aeca2a8c32092a717de23185f14876571f740 WatchSource:0}: Error finding container 66b7e17ad91e5199f04d5db18b8aeca2a8c32092a717de23185f14876571f740: Status 404 returned error can't find the container with id 66b7e17ad91e5199f04d5db18b8aeca2a8c32092a717de23185f14876571f740 Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.642739 4802 generic.go:334] "Generic (PLEG): container finished" podID="85b6b87c-0d6a-4bd3-af19-94f09748a665" containerID="ad161aae239278648b0613e04bdaa25adc2804fdb4d27d663abc795a8dbc44e6" exitCode=0 Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.643077 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" event={"ID":"85b6b87c-0d6a-4bd3-af19-94f09748a665","Type":"ContainerDied","Data":"ad161aae239278648b0613e04bdaa25adc2804fdb4d27d663abc795a8dbc44e6"} Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.643103 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" 
event={"ID":"85b6b87c-0d6a-4bd3-af19-94f09748a665","Type":"ContainerStarted","Data":"66b7e17ad91e5199f04d5db18b8aeca2a8c32092a717de23185f14876571f740"} Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.649152 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jk5tx" event={"ID":"f5a68aa1-61b6-4151-b77b-8b107570d0e6","Type":"ContainerStarted","Data":"9ae416937da8a8eefd7be981d283b467517a7aaa6b0b1e7e33532c82220c5aad"} Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.652634 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cm69w" event={"ID":"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c","Type":"ContainerStarted","Data":"1201e702e79a5e472fcd74e8b0c30e18570bb160af864fb399a2f90de454d671"} Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.652744 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cm69w" event={"ID":"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c","Type":"ContainerStarted","Data":"a13e049e8e3cb3cd6ae3316e0c37d8fce7f583df56e2fcda926e56bc8afb45cf"} Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.655462 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nfznr" Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.656113 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f92d045a-efa3-4087-a8f6-940f4446c663","Type":"ContainerStarted","Data":"fd234756a97ec6198eba765dfdccd263238d62372970fc1aa4d2526a6540d2c8"} Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.656161 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-vm254" Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.720320 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-vm254"] Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.727151 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-vm254"] Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.732513 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cm69w" podStartSLOduration=2.732496401 podStartE2EDuration="2.732496401s" podCreationTimestamp="2025-10-04 05:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:06:16.718085482 +0000 UTC m=+1219.126086117" watchObservedRunningTime="2025-10-04 05:06:16.732496401 +0000 UTC m=+1219.140497016" Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.759793 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nfznr"] Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.768908 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nfznr"] Oct 04 05:06:16 crc kubenswrapper[4802]: I1004 05:06:16.892244 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.665196 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" event={"ID":"85b6b87c-0d6a-4bd3-af19-94f09748a665","Type":"ContainerStarted","Data":"ae46130066190aeed06bc56a99e237b3e545d1317fa61def3da43e2ac3afcdc4"} Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.698571 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" podStartSLOduration=2.698533673 
podStartE2EDuration="2.698533673s" podCreationTimestamp="2025-10-04 05:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:06:17.688454487 +0000 UTC m=+1220.096455122" watchObservedRunningTime="2025-10-04 05:06:17.698533673 +0000 UTC m=+1220.106534308" Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.724936 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c331-account-create-9dbks"] Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.727271 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c331-account-create-9dbks" Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.738206 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.748696 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c331-account-create-9dbks"] Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.835766 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d697-account-create-fm82l"] Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.836934 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d697-account-create-fm82l" Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.845272 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.860599 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d697-account-create-fm82l"] Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.863810 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hr7\" (UniqueName: \"kubernetes.io/projected/1d077b7d-471d-4f5c-a970-0c8d775643dc-kube-api-access-h8hr7\") pod \"cinder-c331-account-create-9dbks\" (UID: \"1d077b7d-471d-4f5c-a970-0c8d775643dc\") " pod="openstack/cinder-c331-account-create-9dbks" Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.965195 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hr7\" (UniqueName: \"kubernetes.io/projected/1d077b7d-471d-4f5c-a970-0c8d775643dc-kube-api-access-h8hr7\") pod \"cinder-c331-account-create-9dbks\" (UID: \"1d077b7d-471d-4f5c-a970-0c8d775643dc\") " pod="openstack/cinder-c331-account-create-9dbks" Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.965328 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxf6\" (UniqueName: \"kubernetes.io/projected/e1f0f5f8-50f9-4c45-9223-7c43bd900627-kube-api-access-9nxf6\") pod \"barbican-d697-account-create-fm82l\" (UID: \"e1f0f5f8-50f9-4c45-9223-7c43bd900627\") " pod="openstack/barbican-d697-account-create-fm82l" Oct 04 05:06:17 crc kubenswrapper[4802]: I1004 05:06:17.991157 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hr7\" (UniqueName: \"kubernetes.io/projected/1d077b7d-471d-4f5c-a970-0c8d775643dc-kube-api-access-h8hr7\") pod \"cinder-c331-account-create-9dbks\" (UID: 
\"1d077b7d-471d-4f5c-a970-0c8d775643dc\") " pod="openstack/cinder-c331-account-create-9dbks" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.066858 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxf6\" (UniqueName: \"kubernetes.io/projected/e1f0f5f8-50f9-4c45-9223-7c43bd900627-kube-api-access-9nxf6\") pod \"barbican-d697-account-create-fm82l\" (UID: \"e1f0f5f8-50f9-4c45-9223-7c43bd900627\") " pod="openstack/barbican-d697-account-create-fm82l" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.083053 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c331-account-create-9dbks" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.085478 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxf6\" (UniqueName: \"kubernetes.io/projected/e1f0f5f8-50f9-4c45-9223-7c43bd900627-kube-api-access-9nxf6\") pod \"barbican-d697-account-create-fm82l\" (UID: \"e1f0f5f8-50f9-4c45-9223-7c43bd900627\") " pod="openstack/barbican-d697-account-create-fm82l" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.105208 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-93e5-account-create-bq7mn"] Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.106774 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-93e5-account-create-bq7mn" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.109960 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.127936 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-93e5-account-create-bq7mn"] Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.157873 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d697-account-create-fm82l" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.168073 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktgwj\" (UniqueName: \"kubernetes.io/projected/21b0ce29-901c-4bdb-a6ca-a5dfb3987559-kube-api-access-ktgwj\") pod \"neutron-93e5-account-create-bq7mn\" (UID: \"21b0ce29-901c-4bdb-a6ca-a5dfb3987559\") " pod="openstack/neutron-93e5-account-create-bq7mn" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.270000 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktgwj\" (UniqueName: \"kubernetes.io/projected/21b0ce29-901c-4bdb-a6ca-a5dfb3987559-kube-api-access-ktgwj\") pod \"neutron-93e5-account-create-bq7mn\" (UID: \"21b0ce29-901c-4bdb-a6ca-a5dfb3987559\") " pod="openstack/neutron-93e5-account-create-bq7mn" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.287432 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktgwj\" (UniqueName: \"kubernetes.io/projected/21b0ce29-901c-4bdb-a6ca-a5dfb3987559-kube-api-access-ktgwj\") pod \"neutron-93e5-account-create-bq7mn\" (UID: \"21b0ce29-901c-4bdb-a6ca-a5dfb3987559\") " pod="openstack/neutron-93e5-account-create-bq7mn" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.369966 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226a8579-6d84-456c-961f-087441faa92f" path="/var/lib/kubelet/pods/226a8579-6d84-456c-961f-087441faa92f/volumes" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.370732 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6623ce-dbff-4c97-a029-dee0ffe606f0" path="/var/lib/kubelet/pods/dd6623ce-dbff-4c97-a029-dee0ffe606f0/volumes" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.458420 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-93e5-account-create-bq7mn" Oct 04 05:06:18 crc kubenswrapper[4802]: I1004 05:06:18.680662 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:21 crc kubenswrapper[4802]: I1004 05:06:21.479560 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-93e5-account-create-bq7mn"] Oct 04 05:06:21 crc kubenswrapper[4802]: I1004 05:06:21.531258 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c331-account-create-9dbks"] Oct 04 05:06:21 crc kubenswrapper[4802]: I1004 05:06:21.592841 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d697-account-create-fm82l"] Oct 04 05:06:22 crc kubenswrapper[4802]: I1004 05:06:22.723111 4802 generic.go:334] "Generic (PLEG): container finished" podID="cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" containerID="1201e702e79a5e472fcd74e8b0c30e18570bb160af864fb399a2f90de454d671" exitCode=0 Oct 04 05:06:22 crc kubenswrapper[4802]: I1004 05:06:22.723197 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cm69w" event={"ID":"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c","Type":"ContainerDied","Data":"1201e702e79a5e472fcd74e8b0c30e18570bb160af864fb399a2f90de454d671"} Oct 04 05:06:24 crc kubenswrapper[4802]: I1004 05:06:24.777936 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-93e5-account-create-bq7mn" event={"ID":"21b0ce29-901c-4bdb-a6ca-a5dfb3987559","Type":"ContainerStarted","Data":"44b1f1659c8de033f8a4b17b886de1e3121da4b96e7e175199755bf8ce964ec3"} Oct 04 05:06:24 crc kubenswrapper[4802]: I1004 05:06:24.787881 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d697-account-create-fm82l" event={"ID":"e1f0f5f8-50f9-4c45-9223-7c43bd900627","Type":"ContainerStarted","Data":"559941ce42d0bdeba609a0f4df33cdd311f4721cd45449d3433ab81ed4835c79"} Oct 04 05:06:24 crc 
kubenswrapper[4802]: I1004 05:06:24.789357 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c331-account-create-9dbks" event={"ID":"1d077b7d-471d-4f5c-a970-0c8d775643dc","Type":"ContainerStarted","Data":"3459492610f95104ce6ade5ff55949e9cb8a0a913be9f1bb7fec22d00230a15b"} Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.653830 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.695289 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-combined-ca-bundle\") pod \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.696746 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-scripts\") pod \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.696799 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k7wz\" (UniqueName: \"kubernetes.io/projected/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-kube-api-access-5k7wz\") pod \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.696987 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-credential-keys\") pod \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.697070 4802 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-fernet-keys\") pod \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.697108 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-config-data\") pod \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\" (UID: \"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c\") " Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.703334 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-kube-api-access-5k7wz" (OuterVolumeSpecName: "kube-api-access-5k7wz") pod "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" (UID: "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c"). InnerVolumeSpecName "kube-api-access-5k7wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.704997 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" (UID: "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.709013 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" (UID: "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.709531 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-scripts" (OuterVolumeSpecName: "scripts") pod "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" (UID: "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.728378 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" (UID: "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.739249 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-config-data" (OuterVolumeSpecName: "config-data") pod "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" (UID: "cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.775354 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.798850 4802 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.798894 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.798906 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.798919 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.798931 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k7wz\" (UniqueName: \"kubernetes.io/projected/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-kube-api-access-5k7wz\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.798942 4802 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.821773 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cm69w" 
event={"ID":"cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c","Type":"ContainerDied","Data":"a13e049e8e3cb3cd6ae3316e0c37d8fce7f583df56e2fcda926e56bc8afb45cf"} Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.821819 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13e049e8e3cb3cd6ae3316e0c37d8fce7f583df56e2fcda926e56bc8afb45cf" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.821886 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cm69w" Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.826164 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gkmb5"] Oct 04 05:06:25 crc kubenswrapper[4802]: I1004 05:06:25.827182 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" podUID="96bc52e4-8853-4e2e-9246-17b6644e096b" containerName="dnsmasq-dns" containerID="cri-o://09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19" gracePeriod=10 Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.764066 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cm69w"] Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.773116 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cm69w"] Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.782266 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.814859 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9z6h\" (UniqueName: \"kubernetes.io/projected/96bc52e4-8853-4e2e-9246-17b6644e096b-kube-api-access-j9z6h\") pod \"96bc52e4-8853-4e2e-9246-17b6644e096b\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.814954 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-sb\") pod \"96bc52e4-8853-4e2e-9246-17b6644e096b\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.815073 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-config\") pod \"96bc52e4-8853-4e2e-9246-17b6644e096b\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.815142 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-dns-svc\") pod \"96bc52e4-8853-4e2e-9246-17b6644e096b\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.815179 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-nb\") pod \"96bc52e4-8853-4e2e-9246-17b6644e096b\" (UID: \"96bc52e4-8853-4e2e-9246-17b6644e096b\") " Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.833845 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/96bc52e4-8853-4e2e-9246-17b6644e096b-kube-api-access-j9z6h" (OuterVolumeSpecName: "kube-api-access-j9z6h") pod "96bc52e4-8853-4e2e-9246-17b6644e096b" (UID: "96bc52e4-8853-4e2e-9246-17b6644e096b"). InnerVolumeSpecName "kube-api-access-j9z6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.863605 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ghjl7"] Oct 04 05:06:26 crc kubenswrapper[4802]: E1004 05:06:26.864053 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bc52e4-8853-4e2e-9246-17b6644e096b" containerName="init" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.864076 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bc52e4-8853-4e2e-9246-17b6644e096b" containerName="init" Oct 04 05:06:26 crc kubenswrapper[4802]: E1004 05:06:26.864096 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bc52e4-8853-4e2e-9246-17b6644e096b" containerName="dnsmasq-dns" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.864106 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bc52e4-8853-4e2e-9246-17b6644e096b" containerName="dnsmasq-dns" Oct 04 05:06:26 crc kubenswrapper[4802]: E1004 05:06:26.864124 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" containerName="keystone-bootstrap" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.864132 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" containerName="keystone-bootstrap" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.866922 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" containerName="keystone-bootstrap" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.866975 4802 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="96bc52e4-8853-4e2e-9246-17b6644e096b" containerName="dnsmasq-dns" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.867927 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.868935 4802 generic.go:334] "Generic (PLEG): container finished" podID="21b0ce29-901c-4bdb-a6ca-a5dfb3987559" containerID="cca958026b08c6167a99d0867a1fc5539ceba2ce0a547ae8249131d20069ef45" exitCode=0 Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.869075 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-93e5-account-create-bq7mn" event={"ID":"21b0ce29-901c-4bdb-a6ca-a5dfb3987559","Type":"ContainerDied","Data":"cca958026b08c6167a99d0867a1fc5539ceba2ce0a547ae8249131d20069ef45"} Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.878225 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.878426 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.878546 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fxp4p" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.878680 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.880566 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-config" (OuterVolumeSpecName: "config") pod "96bc52e4-8853-4e2e-9246-17b6644e096b" (UID: "96bc52e4-8853-4e2e-9246-17b6644e096b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.881873 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ghjl7"] Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.892670 4802 generic.go:334] "Generic (PLEG): container finished" podID="96bc52e4-8853-4e2e-9246-17b6644e096b" containerID="09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19" exitCode=0 Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.892747 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" event={"ID":"96bc52e4-8853-4e2e-9246-17b6644e096b","Type":"ContainerDied","Data":"09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19"} Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.892772 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" event={"ID":"96bc52e4-8853-4e2e-9246-17b6644e096b","Type":"ContainerDied","Data":"8ff4fe503d5cb87a1d008bd2fed2343433d9ea24fb99020f9ee7cb86cfee7905"} Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.892798 4802 scope.go:117] "RemoveContainer" containerID="09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.892907 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gkmb5" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.896268 4802 generic.go:334] "Generic (PLEG): container finished" podID="e1f0f5f8-50f9-4c45-9223-7c43bd900627" containerID="8be179e1554dbab44843a37fdf90a6b5ab9e2203351a982850da0f1dcc39e89c" exitCode=0 Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.896307 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d697-account-create-fm82l" event={"ID":"e1f0f5f8-50f9-4c45-9223-7c43bd900627","Type":"ContainerDied","Data":"8be179e1554dbab44843a37fdf90a6b5ab9e2203351a982850da0f1dcc39e89c"} Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.909590 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "96bc52e4-8853-4e2e-9246-17b6644e096b" (UID: "96bc52e4-8853-4e2e-9246-17b6644e096b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.910726 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96bc52e4-8853-4e2e-9246-17b6644e096b" (UID: "96bc52e4-8853-4e2e-9246-17b6644e096b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916565 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-fernet-keys\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916625 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-combined-ca-bundle\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916700 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-config-data\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916755 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-scripts\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916787 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-885cl\" (UniqueName: \"kubernetes.io/projected/28fb85c9-6063-44b2-871a-4c39ae649b9c-kube-api-access-885cl\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " 
pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916804 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-credential-keys\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916921 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916937 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916945 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9z6h\" (UniqueName: \"kubernetes.io/projected/96bc52e4-8853-4e2e-9246-17b6644e096b-kube-api-access-j9z6h\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.916956 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.922227 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jk5tx" event={"ID":"f5a68aa1-61b6-4151-b77b-8b107570d0e6","Type":"ContainerStarted","Data":"753fbdecf1a003bf9941c7febc9f392a5eff0ecf5e814e75f5dd344530c13edc"} Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.923903 4802 generic.go:334] "Generic (PLEG): container finished" podID="1d077b7d-471d-4f5c-a970-0c8d775643dc" 
containerID="1103ac549024c5d9daa0e4a78f8e55ed7c1c3ca17a522f9f4cc2302ac85c2a03" exitCode=0 Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.923960 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c331-account-create-9dbks" event={"ID":"1d077b7d-471d-4f5c-a970-0c8d775643dc","Type":"ContainerDied","Data":"1103ac549024c5d9daa0e4a78f8e55ed7c1c3ca17a522f9f4cc2302ac85c2a03"} Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.927003 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "96bc52e4-8853-4e2e-9246-17b6644e096b" (UID: "96bc52e4-8853-4e2e-9246-17b6644e096b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.930517 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f92d045a-efa3-4087-a8f6-940f4446c663","Type":"ContainerStarted","Data":"13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f"} Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.940402 4802 scope.go:117] "RemoveContainer" containerID="849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.963507 4802 scope.go:117] "RemoveContainer" containerID="09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.963666 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jk5tx" podStartSLOduration=2.258338468 podStartE2EDuration="11.963634169s" podCreationTimestamp="2025-10-04 05:06:15 +0000 UTC" firstStartedPulling="2025-10-04 05:06:16.024654074 +0000 UTC m=+1218.432654699" lastFinishedPulling="2025-10-04 05:06:25.729949775 +0000 UTC m=+1228.137950400" observedRunningTime="2025-10-04 
05:06:26.955212979 +0000 UTC m=+1229.363213604" watchObservedRunningTime="2025-10-04 05:06:26.963634169 +0000 UTC m=+1229.371634794" Oct 04 05:06:26 crc kubenswrapper[4802]: E1004 05:06:26.964015 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19\": container with ID starting with 09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19 not found: ID does not exist" containerID="09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.964067 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19"} err="failed to get container status \"09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19\": rpc error: code = NotFound desc = could not find container \"09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19\": container with ID starting with 09a70596aac7d60dd1bfd9809e6250c5e21f0559c0b925650025d52947f19b19 not found: ID does not exist" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.964100 4802 scope.go:117] "RemoveContainer" containerID="849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4" Oct 04 05:06:26 crc kubenswrapper[4802]: E1004 05:06:26.964940 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4\": container with ID starting with 849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4 not found: ID does not exist" containerID="849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4" Oct 04 05:06:26 crc kubenswrapper[4802]: I1004 05:06:26.964995 4802 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4"} err="failed to get container status \"849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4\": rpc error: code = NotFound desc = could not find container \"849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4\": container with ID starting with 849c6c0b5f154ddcc90fc4f58470669c3e40595dce3b5ffb3f4168edf073b9f4 not found: ID does not exist" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.017901 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-fernet-keys\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.017969 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-combined-ca-bundle\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.018006 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-config-data\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.018035 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-scripts\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: 
I1004 05:06:27.018076 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-885cl\" (UniqueName: \"kubernetes.io/projected/28fb85c9-6063-44b2-871a-4c39ae649b9c-kube-api-access-885cl\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.018092 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-credential-keys\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.018179 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96bc52e4-8853-4e2e-9246-17b6644e096b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.022105 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-scripts\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.022371 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-combined-ca-bundle\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.022823 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-credential-keys\") pod 
\"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.023070 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-config-data\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.025362 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-fernet-keys\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.033754 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-885cl\" (UniqueName: \"kubernetes.io/projected/28fb85c9-6063-44b2-871a-4c39ae649b9c-kube-api-access-885cl\") pod \"keystone-bootstrap-ghjl7\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.239101 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.240912 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gkmb5"] Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.246794 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gkmb5"] Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.669603 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ghjl7"] Oct 04 05:06:27 crc kubenswrapper[4802]: W1004 05:06:27.681038 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28fb85c9_6063_44b2_871a_4c39ae649b9c.slice/crio-4a0fd06be72d3fefed93293ebec477fb3ffaf38986752f8acebfc7e82b3aac69 WatchSource:0}: Error finding container 4a0fd06be72d3fefed93293ebec477fb3ffaf38986752f8acebfc7e82b3aac69: Status 404 returned error can't find the container with id 4a0fd06be72d3fefed93293ebec477fb3ffaf38986752f8acebfc7e82b3aac69 Oct 04 05:06:27 crc kubenswrapper[4802]: I1004 05:06:27.939376 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ghjl7" event={"ID":"28fb85c9-6063-44b2-871a-4c39ae649b9c","Type":"ContainerStarted","Data":"4a0fd06be72d3fefed93293ebec477fb3ffaf38986752f8acebfc7e82b3aac69"} Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.384346 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96bc52e4-8853-4e2e-9246-17b6644e096b" path="/var/lib/kubelet/pods/96bc52e4-8853-4e2e-9246-17b6644e096b/volumes" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.384976 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c" path="/var/lib/kubelet/pods/cfbd6a6d-e6ad-4110-9dc7-f10efc07f07c/volumes" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.811174 4802 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-93e5-account-create-bq7mn" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.851695 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d697-account-create-fm82l" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.852565 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c331-account-create-9dbks" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.947747 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c331-account-create-9dbks" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.948076 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c331-account-create-9dbks" event={"ID":"1d077b7d-471d-4f5c-a970-0c8d775643dc","Type":"ContainerDied","Data":"3459492610f95104ce6ade5ff55949e9cb8a0a913be9f1bb7fec22d00230a15b"} Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.948097 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3459492610f95104ce6ade5ff55949e9cb8a0a913be9f1bb7fec22d00230a15b" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.950212 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f92d045a-efa3-4087-a8f6-940f4446c663","Type":"ContainerStarted","Data":"979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6"} Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.951878 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-93e5-account-create-bq7mn" event={"ID":"21b0ce29-901c-4bdb-a6ca-a5dfb3987559","Type":"ContainerDied","Data":"44b1f1659c8de033f8a4b17b886de1e3121da4b96e7e175199755bf8ce964ec3"} Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.951923 4802 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="44b1f1659c8de033f8a4b17b886de1e3121da4b96e7e175199755bf8ce964ec3" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.951895 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-93e5-account-create-bq7mn" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.952844 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d697-account-create-fm82l" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.952845 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d697-account-create-fm82l" event={"ID":"e1f0f5f8-50f9-4c45-9223-7c43bd900627","Type":"ContainerDied","Data":"559941ce42d0bdeba609a0f4df33cdd311f4721cd45449d3433ab81ed4835c79"} Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.952947 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559941ce42d0bdeba609a0f4df33cdd311f4721cd45449d3433ab81ed4835c79" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.953810 4802 generic.go:334] "Generic (PLEG): container finished" podID="f5a68aa1-61b6-4151-b77b-8b107570d0e6" containerID="753fbdecf1a003bf9941c7febc9f392a5eff0ecf5e814e75f5dd344530c13edc" exitCode=0 Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.953870 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jk5tx" event={"ID":"f5a68aa1-61b6-4151-b77b-8b107570d0e6","Type":"ContainerDied","Data":"753fbdecf1a003bf9941c7febc9f392a5eff0ecf5e814e75f5dd344530c13edc"} Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.954091 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktgwj\" (UniqueName: \"kubernetes.io/projected/21b0ce29-901c-4bdb-a6ca-a5dfb3987559-kube-api-access-ktgwj\") pod \"21b0ce29-901c-4bdb-a6ca-a5dfb3987559\" (UID: \"21b0ce29-901c-4bdb-a6ca-a5dfb3987559\") " Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.954142 
4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxf6\" (UniqueName: \"kubernetes.io/projected/e1f0f5f8-50f9-4c45-9223-7c43bd900627-kube-api-access-9nxf6\") pod \"e1f0f5f8-50f9-4c45-9223-7c43bd900627\" (UID: \"e1f0f5f8-50f9-4c45-9223-7c43bd900627\") " Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.954253 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8hr7\" (UniqueName: \"kubernetes.io/projected/1d077b7d-471d-4f5c-a970-0c8d775643dc-kube-api-access-h8hr7\") pod \"1d077b7d-471d-4f5c-a970-0c8d775643dc\" (UID: \"1d077b7d-471d-4f5c-a970-0c8d775643dc\") " Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.955979 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ghjl7" event={"ID":"28fb85c9-6063-44b2-871a-4c39ae649b9c","Type":"ContainerStarted","Data":"c4caa4c3c80f979a3554ec1096134c180378ff66dc729cdedc3e4d48bf1504f9"} Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.963076 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d077b7d-471d-4f5c-a970-0c8d775643dc-kube-api-access-h8hr7" (OuterVolumeSpecName: "kube-api-access-h8hr7") pod "1d077b7d-471d-4f5c-a970-0c8d775643dc" (UID: "1d077b7d-471d-4f5c-a970-0c8d775643dc"). InnerVolumeSpecName "kube-api-access-h8hr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.963150 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b0ce29-901c-4bdb-a6ca-a5dfb3987559-kube-api-access-ktgwj" (OuterVolumeSpecName: "kube-api-access-ktgwj") pod "21b0ce29-901c-4bdb-a6ca-a5dfb3987559" (UID: "21b0ce29-901c-4bdb-a6ca-a5dfb3987559"). InnerVolumeSpecName "kube-api-access-ktgwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.963187 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f0f5f8-50f9-4c45-9223-7c43bd900627-kube-api-access-9nxf6" (OuterVolumeSpecName: "kube-api-access-9nxf6") pod "e1f0f5f8-50f9-4c45-9223-7c43bd900627" (UID: "e1f0f5f8-50f9-4c45-9223-7c43bd900627"). InnerVolumeSpecName "kube-api-access-9nxf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:28 crc kubenswrapper[4802]: I1004 05:06:28.992609 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ghjl7" podStartSLOduration=2.992557943 podStartE2EDuration="2.992557943s" podCreationTimestamp="2025-10-04 05:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:06:28.98577449 +0000 UTC m=+1231.393775115" watchObservedRunningTime="2025-10-04 05:06:28.992557943 +0000 UTC m=+1231.400558568" Oct 04 05:06:29 crc kubenswrapper[4802]: I1004 05:06:29.056651 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktgwj\" (UniqueName: \"kubernetes.io/projected/21b0ce29-901c-4bdb-a6ca-a5dfb3987559-kube-api-access-ktgwj\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:29 crc kubenswrapper[4802]: I1004 05:06:29.056830 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxf6\" (UniqueName: \"kubernetes.io/projected/e1f0f5f8-50f9-4c45-9223-7c43bd900627-kube-api-access-9nxf6\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:29 crc kubenswrapper[4802]: I1004 05:06:29.056902 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8hr7\" (UniqueName: \"kubernetes.io/projected/1d077b7d-471d-4f5c-a970-0c8d775643dc-kube-api-access-h8hr7\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.303163 4802 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.487793 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-config-data\") pod \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.487891 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcspf\" (UniqueName: \"kubernetes.io/projected/f5a68aa1-61b6-4151-b77b-8b107570d0e6-kube-api-access-dcspf\") pod \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.489835 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5a68aa1-61b6-4151-b77b-8b107570d0e6-logs\") pod \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.489892 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-combined-ca-bundle\") pod \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.489917 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-scripts\") pod \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\" (UID: \"f5a68aa1-61b6-4151-b77b-8b107570d0e6\") " Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.491987 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f5a68aa1-61b6-4151-b77b-8b107570d0e6-logs" (OuterVolumeSpecName: "logs") pod "f5a68aa1-61b6-4151-b77b-8b107570d0e6" (UID: "f5a68aa1-61b6-4151-b77b-8b107570d0e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.495868 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a68aa1-61b6-4151-b77b-8b107570d0e6-kube-api-access-dcspf" (OuterVolumeSpecName: "kube-api-access-dcspf") pod "f5a68aa1-61b6-4151-b77b-8b107570d0e6" (UID: "f5a68aa1-61b6-4151-b77b-8b107570d0e6"). InnerVolumeSpecName "kube-api-access-dcspf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.500591 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-scripts" (OuterVolumeSpecName: "scripts") pod "f5a68aa1-61b6-4151-b77b-8b107570d0e6" (UID: "f5a68aa1-61b6-4151-b77b-8b107570d0e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.523170 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-config-data" (OuterVolumeSpecName: "config-data") pod "f5a68aa1-61b6-4151-b77b-8b107570d0e6" (UID: "f5a68aa1-61b6-4151-b77b-8b107570d0e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.544017 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5a68aa1-61b6-4151-b77b-8b107570d0e6" (UID: "f5a68aa1-61b6-4151-b77b-8b107570d0e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.592574 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.592984 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcspf\" (UniqueName: \"kubernetes.io/projected/f5a68aa1-61b6-4151-b77b-8b107570d0e6-kube-api-access-dcspf\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.592999 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5a68aa1-61b6-4151-b77b-8b107570d0e6-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.593009 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.593017 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5a68aa1-61b6-4151-b77b-8b107570d0e6-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.976446 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jk5tx" event={"ID":"f5a68aa1-61b6-4151-b77b-8b107570d0e6","Type":"ContainerDied","Data":"9ae416937da8a8eefd7be981d283b467517a7aaa6b0b1e7e33532c82220c5aad"} Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.976503 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ae416937da8a8eefd7be981d283b467517a7aaa6b0b1e7e33532c82220c5aad" Oct 04 05:06:30 crc kubenswrapper[4802]: I1004 05:06:30.976591 4802 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-db-sync-jk5tx" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.049275 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64457d9b48-l5jfj"] Oct 04 05:06:31 crc kubenswrapper[4802]: E1004 05:06:31.049587 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f0f5f8-50f9-4c45-9223-7c43bd900627" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.049772 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f0f5f8-50f9-4c45-9223-7c43bd900627" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: E1004 05:06:31.049789 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b0ce29-901c-4bdb-a6ca-a5dfb3987559" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.049796 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b0ce29-901c-4bdb-a6ca-a5dfb3987559" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: E1004 05:06:31.049812 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a68aa1-61b6-4151-b77b-8b107570d0e6" containerName="placement-db-sync" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.049819 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a68aa1-61b6-4151-b77b-8b107570d0e6" containerName="placement-db-sync" Oct 04 05:06:31 crc kubenswrapper[4802]: E1004 05:06:31.049847 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d077b7d-471d-4f5c-a970-0c8d775643dc" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.049853 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d077b7d-471d-4f5c-a970-0c8d775643dc" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.050005 4802 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d077b7d-471d-4f5c-a970-0c8d775643dc" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.050028 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f0f5f8-50f9-4c45-9223-7c43bd900627" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.050048 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b0ce29-901c-4bdb-a6ca-a5dfb3987559" containerName="mariadb-account-create" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.050063 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a68aa1-61b6-4151-b77b-8b107570d0e6" containerName="placement-db-sync" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.050923 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.053329 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.053336 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.053837 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.053977 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-55hqk" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.054086 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.076856 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64457d9b48-l5jfj"] Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.202925 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-scripts\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.203156 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-logs\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.203276 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-combined-ca-bundle\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.203375 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9mg\" (UniqueName: \"kubernetes.io/projected/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-kube-api-access-6x9mg\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.203479 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-public-tls-certs\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.203582 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-internal-tls-certs\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.203676 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-config-data\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.305686 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-scripts\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.305737 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-logs\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.305782 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-combined-ca-bundle\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.305805 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-6x9mg\" (UniqueName: \"kubernetes.io/projected/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-kube-api-access-6x9mg\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.305826 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-public-tls-certs\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.305847 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-internal-tls-certs\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.305865 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-config-data\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.307088 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-logs\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.310747 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-internal-tls-certs\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.310824 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-scripts\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.311165 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-combined-ca-bundle\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.313378 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-config-data\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.322667 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x9mg\" (UniqueName: \"kubernetes.io/projected/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-kube-api-access-6x9mg\") pod \"placement-64457d9b48-l5jfj\" (UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.331265 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb69fc77-66b3-4f06-b438-8fbd159a4c3f-public-tls-certs\") pod \"placement-64457d9b48-l5jfj\" 
(UID: \"bb69fc77-66b3-4f06-b438-8fbd159a4c3f\") " pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.371303 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.986401 4802 generic.go:334] "Generic (PLEG): container finished" podID="28fb85c9-6063-44b2-871a-4c39ae649b9c" containerID="c4caa4c3c80f979a3554ec1096134c180378ff66dc729cdedc3e4d48bf1504f9" exitCode=0 Oct 04 05:06:31 crc kubenswrapper[4802]: I1004 05:06:31.986789 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ghjl7" event={"ID":"28fb85c9-6063-44b2-871a-4c39ae649b9c","Type":"ContainerDied","Data":"c4caa4c3c80f979a3554ec1096134c180378ff66dc729cdedc3e4d48bf1504f9"} Oct 04 05:06:32 crc kubenswrapper[4802]: I1004 05:06:32.933362 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4mftc"] Oct 04 05:06:32 crc kubenswrapper[4802]: I1004 05:06:32.934622 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:32 crc kubenswrapper[4802]: I1004 05:06:32.936606 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 04 05:06:32 crc kubenswrapper[4802]: I1004 05:06:32.936629 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w54h4" Oct 04 05:06:32 crc kubenswrapper[4802]: I1004 05:06:32.937314 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 04 05:06:32 crc kubenswrapper[4802]: I1004 05:06:32.946721 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4mftc"] Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.031570 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35c8191-842a-419f-8b4a-6f36bd01f6cd-etc-machine-id\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.031988 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-db-sync-config-data\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.032058 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-combined-ca-bundle\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.032100 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-config-data\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.032161 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-scripts\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.032220 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghdwd\" (UniqueName: \"kubernetes.io/projected/f35c8191-842a-419f-8b4a-6f36bd01f6cd-kube-api-access-ghdwd\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.135519 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-scripts\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.137913 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghdwd\" (UniqueName: \"kubernetes.io/projected/f35c8191-842a-419f-8b4a-6f36bd01f6cd-kube-api-access-ghdwd\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.138364 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f35c8191-842a-419f-8b4a-6f36bd01f6cd-etc-machine-id\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.138387 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-db-sync-config-data\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.138464 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-combined-ca-bundle\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.138697 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-config-data\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.139017 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35c8191-842a-419f-8b4a-6f36bd01f6cd-etc-machine-id\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.143981 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-combined-ca-bundle\") pod \"cinder-db-sync-4mftc\" (UID: 
\"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.145056 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-scripts\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.145123 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-config-data\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.154142 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-db-sync-config-data\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.174734 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghdwd\" (UniqueName: \"kubernetes.io/projected/f35c8191-842a-419f-8b4a-6f36bd01f6cd-kube-api-access-ghdwd\") pod \"cinder-db-sync-4mftc\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.183335 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-829jj"] Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.184413 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.188192 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-686zn" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.188390 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.204638 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-829jj"] Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.255937 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4mftc" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.341245 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5l4\" (UniqueName: \"kubernetes.io/projected/b6397562-8380-4277-8f96-bf264f7049a2-kube-api-access-bp5l4\") pod \"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.341488 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-combined-ca-bundle\") pod \"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.341660 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-db-sync-config-data\") pod \"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: 
I1004 05:06:33.424148 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-pm9gl"] Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.426242 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.428846 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.429185 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.429376 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vm2kp" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.432776 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pm9gl"] Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.442836 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5l4\" (UniqueName: \"kubernetes.io/projected/b6397562-8380-4277-8f96-bf264f7049a2-kube-api-access-bp5l4\") pod \"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.442911 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-combined-ca-bundle\") pod \"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.443004 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-db-sync-config-data\") pod 
\"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.454308 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-combined-ca-bundle\") pod \"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.457777 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-db-sync-config-data\") pod \"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.458069 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5l4\" (UniqueName: \"kubernetes.io/projected/b6397562-8380-4277-8f96-bf264f7049a2-kube-api-access-bp5l4\") pod \"barbican-db-sync-829jj\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.545336 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrgwp\" (UniqueName: \"kubernetes.io/projected/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-kube-api-access-zrgwp\") pod \"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.545635 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-config\") pod \"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " 
pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.545710 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-combined-ca-bundle\") pod \"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.562334 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.646942 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-config\") pod \"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.647023 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-combined-ca-bundle\") pod \"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.647079 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrgwp\" (UniqueName: \"kubernetes.io/projected/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-kube-api-access-zrgwp\") pod \"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.651210 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-combined-ca-bundle\") pod 
\"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.652951 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-config\") pod \"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.665475 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrgwp\" (UniqueName: \"kubernetes.io/projected/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-kube-api-access-zrgwp\") pod \"neutron-db-sync-pm9gl\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:33 crc kubenswrapper[4802]: I1004 05:06:33.750576 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.491781 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.667792 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-combined-ca-bundle\") pod \"28fb85c9-6063-44b2-871a-4c39ae649b9c\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.668072 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-885cl\" (UniqueName: \"kubernetes.io/projected/28fb85c9-6063-44b2-871a-4c39ae649b9c-kube-api-access-885cl\") pod \"28fb85c9-6063-44b2-871a-4c39ae649b9c\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.668105 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-fernet-keys\") pod \"28fb85c9-6063-44b2-871a-4c39ae649b9c\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.668138 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-config-data\") pod \"28fb85c9-6063-44b2-871a-4c39ae649b9c\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.668173 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-scripts\") pod \"28fb85c9-6063-44b2-871a-4c39ae649b9c\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.668222 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-credential-keys\") pod \"28fb85c9-6063-44b2-871a-4c39ae649b9c\" (UID: \"28fb85c9-6063-44b2-871a-4c39ae649b9c\") " Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.677839 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-scripts" (OuterVolumeSpecName: "scripts") pod "28fb85c9-6063-44b2-871a-4c39ae649b9c" (UID: "28fb85c9-6063-44b2-871a-4c39ae649b9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.681465 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28fb85c9-6063-44b2-871a-4c39ae649b9c-kube-api-access-885cl" (OuterVolumeSpecName: "kube-api-access-885cl") pod "28fb85c9-6063-44b2-871a-4c39ae649b9c" (UID: "28fb85c9-6063-44b2-871a-4c39ae649b9c"). InnerVolumeSpecName "kube-api-access-885cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.683367 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "28fb85c9-6063-44b2-871a-4c39ae649b9c" (UID: "28fb85c9-6063-44b2-871a-4c39ae649b9c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.691424 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "28fb85c9-6063-44b2-871a-4c39ae649b9c" (UID: "28fb85c9-6063-44b2-871a-4c39ae649b9c"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.739582 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28fb85c9-6063-44b2-871a-4c39ae649b9c" (UID: "28fb85c9-6063-44b2-871a-4c39ae649b9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.741038 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-config-data" (OuterVolumeSpecName: "config-data") pod "28fb85c9-6063-44b2-871a-4c39ae649b9c" (UID: "28fb85c9-6063-44b2-871a-4c39ae649b9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.769607 4802 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.769707 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.769722 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.769733 4802 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:34 crc 
kubenswrapper[4802]: I1004 05:06:34.769745 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fb85c9-6063-44b2-871a-4c39ae649b9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.769760 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-885cl\" (UniqueName: \"kubernetes.io/projected/28fb85c9-6063-44b2-871a-4c39ae649b9c-kube-api-access-885cl\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:34 crc kubenswrapper[4802]: I1004 05:06:34.971388 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64457d9b48-l5jfj"] Oct 04 05:06:34 crc kubenswrapper[4802]: W1004 05:06:34.979616 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb69fc77_66b3_4f06_b438_8fbd159a4c3f.slice/crio-485efd6918708b33434abacb119a7de1f1adbe840a633f2dd0f77af3ce622538 WatchSource:0}: Error finding container 485efd6918708b33434abacb119a7de1f1adbe840a633f2dd0f77af3ce622538: Status 404 returned error can't find the container with id 485efd6918708b33434abacb119a7de1f1adbe840a633f2dd0f77af3ce622538 Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.022892 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ghjl7" event={"ID":"28fb85c9-6063-44b2-871a-4c39ae649b9c","Type":"ContainerDied","Data":"4a0fd06be72d3fefed93293ebec477fb3ffaf38986752f8acebfc7e82b3aac69"} Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.022961 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0fd06be72d3fefed93293ebec477fb3ffaf38986752f8acebfc7e82b3aac69" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.022907 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ghjl7" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.028813 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f92d045a-efa3-4087-a8f6-940f4446c663","Type":"ContainerStarted","Data":"6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781"} Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.032858 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64457d9b48-l5jfj" event={"ID":"bb69fc77-66b3-4f06-b438-8fbd159a4c3f","Type":"ContainerStarted","Data":"485efd6918708b33434abacb119a7de1f1adbe840a633f2dd0f77af3ce622538"} Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.034587 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-829jj"] Oct 04 05:06:35 crc kubenswrapper[4802]: W1004 05:06:35.039958 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6397562_8380_4277_8f96_bf264f7049a2.slice/crio-fbd6f356a51e844590f131098484f9097bf94e17971fe755fbab266f8d0b0592 WatchSource:0}: Error finding container fbd6f356a51e844590f131098484f9097bf94e17971fe755fbab266f8d0b0592: Status 404 returned error can't find the container with id fbd6f356a51e844590f131098484f9097bf94e17971fe755fbab266f8d0b0592 Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.101603 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pm9gl"] Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.126656 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4mftc"] Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.590357 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f8c67fc4f-cnjpc"] Oct 04 05:06:35 crc kubenswrapper[4802]: E1004 05:06:35.591140 4802 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="28fb85c9-6063-44b2-871a-4c39ae649b9c" containerName="keystone-bootstrap" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.591160 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fb85c9-6063-44b2-871a-4c39ae649b9c" containerName="keystone-bootstrap" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.591373 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="28fb85c9-6063-44b2-871a-4c39ae649b9c" containerName="keystone-bootstrap" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.592042 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.596201 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.596453 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.596570 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.596704 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fxp4p" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.596749 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.608237 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.661783 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f8c67fc4f-cnjpc"] Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.682186 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-config-data\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.682234 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8lbg\" (UniqueName: \"kubernetes.io/projected/bb5643ab-5cdd-42fa-b96a-180d3137816d-kube-api-access-r8lbg\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.682291 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-internal-tls-certs\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.682319 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-combined-ca-bundle\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.682348 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-scripts\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.682375 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-fernet-keys\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.682411 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-credential-keys\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.682429 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-public-tls-certs\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.784049 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-internal-tls-certs\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.784107 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-combined-ca-bundle\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.784160 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-scripts\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.784192 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-fernet-keys\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.784226 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-credential-keys\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.784244 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-public-tls-certs\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.784278 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-config-data\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.784296 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8lbg\" (UniqueName: \"kubernetes.io/projected/bb5643ab-5cdd-42fa-b96a-180d3137816d-kube-api-access-r8lbg\") pod 
\"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.789771 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-combined-ca-bundle\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.790386 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-credential-keys\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.791453 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-public-tls-certs\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.792625 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-internal-tls-certs\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.803042 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8lbg\" (UniqueName: \"kubernetes.io/projected/bb5643ab-5cdd-42fa-b96a-180d3137816d-kube-api-access-r8lbg\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " 
pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.806045 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-config-data\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.809302 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-scripts\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.815944 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb5643ab-5cdd-42fa-b96a-180d3137816d-fernet-keys\") pod \"keystone-f8c67fc4f-cnjpc\" (UID: \"bb5643ab-5cdd-42fa-b96a-180d3137816d\") " pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:35 crc kubenswrapper[4802]: I1004 05:06:35.924712 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.060803 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pm9gl" event={"ID":"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1","Type":"ContainerStarted","Data":"063cb26e28fd4cbafe4288215ae4b857411b65df26a62d8f00036b1713918e2e"} Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.060859 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pm9gl" event={"ID":"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1","Type":"ContainerStarted","Data":"2400ca8aa763e82a9b98d4bff7b1e404df716d7d7ca92923f4516cc454c67aaf"} Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.064064 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mftc" event={"ID":"f35c8191-842a-419f-8b4a-6f36bd01f6cd","Type":"ContainerStarted","Data":"7e0075814ee060f7c4444fccf75d70a83559fe169e7f31c020893dccd57076ed"} Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.068784 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64457d9b48-l5jfj" event={"ID":"bb69fc77-66b3-4f06-b438-8fbd159a4c3f","Type":"ContainerStarted","Data":"76363a0043f52d871b1c5958c5f0ef228561c867c6a4a3f9009dc7ad919f7bc7"} Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.068823 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64457d9b48-l5jfj" event={"ID":"bb69fc77-66b3-4f06-b438-8fbd159a4c3f","Type":"ContainerStarted","Data":"4659096b031a3d111ca2faedc48bcbc05b21cb5c4e6ea9300fd041e883e66b39"} Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.069541 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.069575 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:06:36 crc 
kubenswrapper[4802]: I1004 05:06:36.071692 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-829jj" event={"ID":"b6397562-8380-4277-8f96-bf264f7049a2","Type":"ContainerStarted","Data":"fbd6f356a51e844590f131098484f9097bf94e17971fe755fbab266f8d0b0592"} Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.120827 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-pm9gl" podStartSLOduration=3.120808227 podStartE2EDuration="3.120808227s" podCreationTimestamp="2025-10-04 05:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:06:36.099128921 +0000 UTC m=+1238.507129547" watchObservedRunningTime="2025-10-04 05:06:36.120808227 +0000 UTC m=+1238.528808852" Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.124787 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-64457d9b48-l5jfj" podStartSLOduration=5.12477279 podStartE2EDuration="5.12477279s" podCreationTimestamp="2025-10-04 05:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:06:36.117790282 +0000 UTC m=+1238.525790907" watchObservedRunningTime="2025-10-04 05:06:36.12477279 +0000 UTC m=+1238.532773415" Oct 04 05:06:36 crc kubenswrapper[4802]: I1004 05:06:36.501036 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f8c67fc4f-cnjpc"] Oct 04 05:06:36 crc kubenswrapper[4802]: W1004 05:06:36.504601 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb5643ab_5cdd_42fa_b96a_180d3137816d.slice/crio-2ee6b840ee826922dc9b7aa4837e150b6416aae8aaed55b014b479ced2910f6f WatchSource:0}: Error finding container 2ee6b840ee826922dc9b7aa4837e150b6416aae8aaed55b014b479ced2910f6f: 
Status 404 returned error can't find the container with id 2ee6b840ee826922dc9b7aa4837e150b6416aae8aaed55b014b479ced2910f6f Oct 04 05:06:37 crc kubenswrapper[4802]: I1004 05:06:37.086089 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8c67fc4f-cnjpc" event={"ID":"bb5643ab-5cdd-42fa-b96a-180d3137816d","Type":"ContainerStarted","Data":"85d5bcf827a75ade22a92c8599a62cc9dea9b5129fbf57e391f2087b76c54b1b"} Oct 04 05:06:37 crc kubenswrapper[4802]: I1004 05:06:37.087371 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:06:37 crc kubenswrapper[4802]: I1004 05:06:37.087397 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8c67fc4f-cnjpc" event={"ID":"bb5643ab-5cdd-42fa-b96a-180d3137816d","Type":"ContainerStarted","Data":"2ee6b840ee826922dc9b7aa4837e150b6416aae8aaed55b014b479ced2910f6f"} Oct 04 05:06:37 crc kubenswrapper[4802]: I1004 05:06:37.117016 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f8c67fc4f-cnjpc" podStartSLOduration=2.116982683 podStartE2EDuration="2.116982683s" podCreationTimestamp="2025-10-04 05:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:06:37.110011485 +0000 UTC m=+1239.518012130" watchObservedRunningTime="2025-10-04 05:06:37.116982683 +0000 UTC m=+1239.524983308" Oct 04 05:06:52 crc kubenswrapper[4802]: I1004 05:06:52.663006 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:06:52 crc kubenswrapper[4802]: I1004 05:06:52.663536 4802 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:06:52 crc kubenswrapper[4802]: E1004 05:06:52.684955 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 04 05:06:52 crc kubenswrapper[4802]: E1004 05:06:52.685406 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.jso
n,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghdwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4mftc_openstack(f35c8191-842a-419f-8b4a-6f36bd01f6cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:06:52 crc kubenswrapper[4802]: E1004 05:06:52.686603 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4mftc" podUID="f35c8191-842a-419f-8b4a-6f36bd01f6cd" Oct 04 05:06:53 crc kubenswrapper[4802]: E1004 05:06:53.217351 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4mftc" podUID="f35c8191-842a-419f-8b4a-6f36bd01f6cd" Oct 04 05:06:53 crc 
kubenswrapper[4802]: E1004 05:06:53.592767 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Oct 04 05:06:53 crc kubenswrapper[4802]: E1004 05:06:53.592951 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbtsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f92d045a-efa3-4087-a8f6-940f4446c663): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 04 05:06:53 crc kubenswrapper[4802]: E1004 05:06:53.594953 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" Oct 04 05:06:54 crc kubenswrapper[4802]: I1004 05:06:54.224708 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-829jj" 
event={"ID":"b6397562-8380-4277-8f96-bf264f7049a2","Type":"ContainerStarted","Data":"fbed82bfdf0bc8661a39f43bdd5de3db8e188e531dd977066bc7ec2b2a9ebc0f"} Oct 04 05:06:54 crc kubenswrapper[4802]: I1004 05:06:54.224880 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="ceilometer-central-agent" containerID="cri-o://13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f" gracePeriod=30 Oct 04 05:06:54 crc kubenswrapper[4802]: I1004 05:06:54.226002 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="ceilometer-notification-agent" containerID="cri-o://979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6" gracePeriod=30 Oct 04 05:06:54 crc kubenswrapper[4802]: I1004 05:06:54.225304 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="sg-core" containerID="cri-o://6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781" gracePeriod=30 Oct 04 05:06:54 crc kubenswrapper[4802]: I1004 05:06:54.254352 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-829jj" podStartSLOduration=2.726655772 podStartE2EDuration="21.254331397s" podCreationTimestamp="2025-10-04 05:06:33 +0000 UTC" firstStartedPulling="2025-10-04 05:06:35.042699644 +0000 UTC m=+1237.450700269" lastFinishedPulling="2025-10-04 05:06:53.570375269 +0000 UTC m=+1255.978375894" observedRunningTime="2025-10-04 05:06:54.240259407 +0000 UTC m=+1256.648260052" watchObservedRunningTime="2025-10-04 05:06:54.254331397 +0000 UTC m=+1256.662332022" Oct 04 05:06:55 crc kubenswrapper[4802]: I1004 05:06:55.235829 4802 generic.go:334] "Generic (PLEG): container finished" podID="f92d045a-efa3-4087-a8f6-940f4446c663" 
containerID="6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781" exitCode=2 Oct 04 05:06:55 crc kubenswrapper[4802]: I1004 05:06:55.236203 4802 generic.go:334] "Generic (PLEG): container finished" podID="f92d045a-efa3-4087-a8f6-940f4446c663" containerID="13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f" exitCode=0 Oct 04 05:06:55 crc kubenswrapper[4802]: I1004 05:06:55.235890 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f92d045a-efa3-4087-a8f6-940f4446c663","Type":"ContainerDied","Data":"6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781"} Oct 04 05:06:55 crc kubenswrapper[4802]: I1004 05:06:55.236380 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f92d045a-efa3-4087-a8f6-940f4446c663","Type":"ContainerDied","Data":"13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f"} Oct 04 05:06:56 crc kubenswrapper[4802]: I1004 05:06:56.928475 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.102690 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbtsw\" (UniqueName: \"kubernetes.io/projected/f92d045a-efa3-4087-a8f6-940f4446c663-kube-api-access-qbtsw\") pod \"f92d045a-efa3-4087-a8f6-940f4446c663\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.102826 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-log-httpd\") pod \"f92d045a-efa3-4087-a8f6-940f4446c663\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.102914 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-combined-ca-bundle\") pod \"f92d045a-efa3-4087-a8f6-940f4446c663\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.102947 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-scripts\") pod \"f92d045a-efa3-4087-a8f6-940f4446c663\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.103021 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-run-httpd\") pod \"f92d045a-efa3-4087-a8f6-940f4446c663\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.103063 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-sg-core-conf-yaml\") pod \"f92d045a-efa3-4087-a8f6-940f4446c663\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.103103 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-config-data\") pod \"f92d045a-efa3-4087-a8f6-940f4446c663\" (UID: \"f92d045a-efa3-4087-a8f6-940f4446c663\") " Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.103481 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f92d045a-efa3-4087-a8f6-940f4446c663" (UID: "f92d045a-efa3-4087-a8f6-940f4446c663"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.104281 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f92d045a-efa3-4087-a8f6-940f4446c663" (UID: "f92d045a-efa3-4087-a8f6-940f4446c663"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.108922 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92d045a-efa3-4087-a8f6-940f4446c663-kube-api-access-qbtsw" (OuterVolumeSpecName: "kube-api-access-qbtsw") pod "f92d045a-efa3-4087-a8f6-940f4446c663" (UID: "f92d045a-efa3-4087-a8f6-940f4446c663"). InnerVolumeSpecName "kube-api-access-qbtsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.114102 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-scripts" (OuterVolumeSpecName: "scripts") pod "f92d045a-efa3-4087-a8f6-940f4446c663" (UID: "f92d045a-efa3-4087-a8f6-940f4446c663"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.131548 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f92d045a-efa3-4087-a8f6-940f4446c663" (UID: "f92d045a-efa3-4087-a8f6-940f4446c663"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.163306 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f92d045a-efa3-4087-a8f6-940f4446c663" (UID: "f92d045a-efa3-4087-a8f6-940f4446c663"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.165547 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-config-data" (OuterVolumeSpecName: "config-data") pod "f92d045a-efa3-4087-a8f6-940f4446c663" (UID: "f92d045a-efa3-4087-a8f6-940f4446c663"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.204857 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.205146 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.205167 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.205177 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f92d045a-efa3-4087-a8f6-940f4446c663-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.205188 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.205197 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92d045a-efa3-4087-a8f6-940f4446c663-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.205207 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbtsw\" (UniqueName: \"kubernetes.io/projected/f92d045a-efa3-4087-a8f6-940f4446c663-kube-api-access-qbtsw\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.254126 4802 generic.go:334] "Generic 
(PLEG): container finished" podID="f92d045a-efa3-4087-a8f6-940f4446c663" containerID="979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6" exitCode=0 Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.254208 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.254209 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f92d045a-efa3-4087-a8f6-940f4446c663","Type":"ContainerDied","Data":"979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6"} Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.254326 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f92d045a-efa3-4087-a8f6-940f4446c663","Type":"ContainerDied","Data":"fd234756a97ec6198eba765dfdccd263238d62372970fc1aa4d2526a6540d2c8"} Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.254350 4802 scope.go:117] "RemoveContainer" containerID="6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.257738 4802 generic.go:334] "Generic (PLEG): container finished" podID="b6397562-8380-4277-8f96-bf264f7049a2" containerID="fbed82bfdf0bc8661a39f43bdd5de3db8e188e531dd977066bc7ec2b2a9ebc0f" exitCode=0 Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.257777 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-829jj" event={"ID":"b6397562-8380-4277-8f96-bf264f7049a2","Type":"ContainerDied","Data":"fbed82bfdf0bc8661a39f43bdd5de3db8e188e531dd977066bc7ec2b2a9ebc0f"} Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.275791 4802 scope.go:117] "RemoveContainer" containerID="979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.298000 4802 scope.go:117] "RemoveContainer" 
containerID="13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.332243 4802 scope.go:117] "RemoveContainer" containerID="6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781" Oct 04 05:06:57 crc kubenswrapper[4802]: E1004 05:06:57.332897 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781\": container with ID starting with 6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781 not found: ID does not exist" containerID="6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.332942 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781"} err="failed to get container status \"6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781\": rpc error: code = NotFound desc = could not find container \"6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781\": container with ID starting with 6c4a38e3c0959f7d445f999f384865d99b91b2a244df7c89d823d47f93eb9781 not found: ID does not exist" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.332961 4802 scope.go:117] "RemoveContainer" containerID="979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6" Oct 04 05:06:57 crc kubenswrapper[4802]: E1004 05:06:57.333137 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6\": container with ID starting with 979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6 not found: ID does not exist" containerID="979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6" Oct 04 05:06:57 crc 
kubenswrapper[4802]: I1004 05:06:57.333371 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6"} err="failed to get container status \"979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6\": rpc error: code = NotFound desc = could not find container \"979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6\": container with ID starting with 979193da9f8dc0ce7d143cf6deef7aa7512eca9e0c95d24443e17fd2bab30db6 not found: ID does not exist" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.333394 4802 scope.go:117] "RemoveContainer" containerID="13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f" Oct 04 05:06:57 crc kubenswrapper[4802]: E1004 05:06:57.333566 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f\": container with ID starting with 13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f not found: ID does not exist" containerID="13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.333588 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f"} err="failed to get container status \"13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f\": rpc error: code = NotFound desc = could not find container \"13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f\": container with ID starting with 13e45a000e483f1802485d1ae108e677a33a69015df66902e461afb3f3a6ff1f not found: ID does not exist" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.334091 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:57 crc kubenswrapper[4802]: 
I1004 05:06:57.345601 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.376570 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:57 crc kubenswrapper[4802]: E1004 05:06:57.377409 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="ceilometer-notification-agent" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.377429 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="ceilometer-notification-agent" Oct 04 05:06:57 crc kubenswrapper[4802]: E1004 05:06:57.377447 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="ceilometer-central-agent" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.377453 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="ceilometer-central-agent" Oct 04 05:06:57 crc kubenswrapper[4802]: E1004 05:06:57.377460 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="sg-core" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.377466 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="sg-core" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.377807 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="ceilometer-notification-agent" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.377827 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="ceilometer-central-agent" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.377841 4802 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" containerName="sg-core" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.382714 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.385753 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.385481 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.388472 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.510505 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-scripts\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.510555 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-run-httpd\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.510623 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.510681 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.510706 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-log-httpd\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.510744 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-config-data\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.511034 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp256\" (UniqueName: \"kubernetes.io/projected/5039c27d-3446-4947-ae13-7cb899a83c71-kube-api-access-kp256\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.612975 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-scripts\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.613010 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-run-httpd\") pod \"ceilometer-0\" (UID: 
\"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.613035 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.613062 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.613076 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-log-httpd\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.613107 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-config-data\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.613155 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp256\" (UniqueName: \"kubernetes.io/projected/5039c27d-3446-4947-ae13-7cb899a83c71-kube-api-access-kp256\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.613549 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-run-httpd\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.613860 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-log-httpd\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.617853 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.618012 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-scripts\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.618252 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-config-data\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.619100 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.631402 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp256\" (UniqueName: \"kubernetes.io/projected/5039c27d-3446-4947-ae13-7cb899a83c71-kube-api-access-kp256\") pod \"ceilometer-0\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " pod="openstack/ceilometer-0" Oct 04 05:06:57 crc kubenswrapper[4802]: I1004 05:06:57.705664 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.129744 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:06:58 crc kubenswrapper[4802]: W1004 05:06:58.144166 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5039c27d_3446_4947_ae13_7cb899a83c71.slice/crio-bf8b348cba0ef712c5917c6d8041f86528e570a277f1c3f08f96cd9db5064eb1 WatchSource:0}: Error finding container bf8b348cba0ef712c5917c6d8041f86528e570a277f1c3f08f96cd9db5064eb1: Status 404 returned error can't find the container with id bf8b348cba0ef712c5917c6d8041f86528e570a277f1c3f08f96cd9db5064eb1 Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.266068 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerStarted","Data":"bf8b348cba0ef712c5917c6d8041f86528e570a277f1c3f08f96cd9db5064eb1"} Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.373448 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f92d045a-efa3-4087-a8f6-940f4446c663" path="/var/lib/kubelet/pods/f92d045a-efa3-4087-a8f6-940f4446c663/volumes" Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.490796 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.629083 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-combined-ca-bundle\") pod \"b6397562-8380-4277-8f96-bf264f7049a2\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.629262 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp5l4\" (UniqueName: \"kubernetes.io/projected/b6397562-8380-4277-8f96-bf264f7049a2-kube-api-access-bp5l4\") pod \"b6397562-8380-4277-8f96-bf264f7049a2\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.629313 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-db-sync-config-data\") pod \"b6397562-8380-4277-8f96-bf264f7049a2\" (UID: \"b6397562-8380-4277-8f96-bf264f7049a2\") " Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.634553 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6397562-8380-4277-8f96-bf264f7049a2-kube-api-access-bp5l4" (OuterVolumeSpecName: "kube-api-access-bp5l4") pod "b6397562-8380-4277-8f96-bf264f7049a2" (UID: "b6397562-8380-4277-8f96-bf264f7049a2"). InnerVolumeSpecName "kube-api-access-bp5l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.634587 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b6397562-8380-4277-8f96-bf264f7049a2" (UID: "b6397562-8380-4277-8f96-bf264f7049a2"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.652823 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6397562-8380-4277-8f96-bf264f7049a2" (UID: "b6397562-8380-4277-8f96-bf264f7049a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.731585 4802 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.731617 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6397562-8380-4277-8f96-bf264f7049a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:58 crc kubenswrapper[4802]: I1004 05:06:58.731626 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp5l4\" (UniqueName: \"kubernetes.io/projected/b6397562-8380-4277-8f96-bf264f7049a2-kube-api-access-bp5l4\") on node \"crc\" DevicePath \"\"" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.275418 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-829jj" event={"ID":"b6397562-8380-4277-8f96-bf264f7049a2","Type":"ContainerDied","Data":"fbd6f356a51e844590f131098484f9097bf94e17971fe755fbab266f8d0b0592"} Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.275798 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd6f356a51e844590f131098484f9097bf94e17971fe755fbab266f8d0b0592" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.275699 4802 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-829jj" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.276858 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerStarted","Data":"7d203a3fb8c5895bf131c4ec74184353486aa9a470cba3ded4cada90689dde01"} Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.546115 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f55c6f5ff-whrkn"] Oct 04 05:06:59 crc kubenswrapper[4802]: E1004 05:06:59.546806 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6397562-8380-4277-8f96-bf264f7049a2" containerName="barbican-db-sync" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.546830 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6397562-8380-4277-8f96-bf264f7049a2" containerName="barbican-db-sync" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.547040 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6397562-8380-4277-8f96-bf264f7049a2" containerName="barbican-db-sync" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.548088 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.550548 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.550678 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.552258 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-686zn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.580996 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f55c6f5ff-whrkn"] Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.605795 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-88b556fb6-qrcjc"] Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.607088 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.610508 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.631203 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-88b556fb6-qrcjc"] Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.641831 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699df9757c-8rvpx"] Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.647180 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-config-data\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.647287 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdzgx\" (UniqueName: \"kubernetes.io/projected/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-kube-api-access-sdzgx\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.647336 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-combined-ca-bundle\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.647363 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-config-data-custom\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.647421 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-logs\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.647666 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.711859 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-8rvpx"] Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.748812 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-combined-ca-bundle\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.748875 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdzgx\" (UniqueName: \"kubernetes.io/projected/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-kube-api-access-sdzgx\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.748904 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-dns-svc\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.748924 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-combined-ca-bundle\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749403 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749432 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-config-data-custom\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749458 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-config\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749486 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfnxd\" (UniqueName: \"kubernetes.io/projected/e89ef05b-4ac0-495f-886e-ab8d4c37195d-kube-api-access-nfnxd\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749508 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-logs\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749535 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-config-data-custom\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749566 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e89ef05b-4ac0-495f-886e-ab8d4c37195d-logs\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749611 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-config-data\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " 
pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749630 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-config-data\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749684 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.749704 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shv58\" (UniqueName: \"kubernetes.io/projected/33420dae-9327-4060-9047-d04b83d925be-kube-api-access-shv58\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.751494 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-logs\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.755139 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-combined-ca-bundle\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " 
pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.756038 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-config-data\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.760918 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-config-data-custom\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.771545 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdzgx\" (UniqueName: \"kubernetes.io/projected/bdbd3c17-f18e-4a9d-a65e-1d199459c0f3-kube-api-access-sdzgx\") pod \"barbican-worker-f55c6f5ff-whrkn\" (UID: \"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3\") " pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.819278 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8446845c4b-xzvk8"] Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.821543 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.823873 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.833136 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8446845c4b-xzvk8"] Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852082 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-config-data\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852148 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852169 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shv58\" (UniqueName: \"kubernetes.io/projected/33420dae-9327-4060-9047-d04b83d925be-kube-api-access-shv58\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852192 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-combined-ca-bundle\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " 
pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852225 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-dns-svc\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852247 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852274 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-config\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852301 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfnxd\" (UniqueName: \"kubernetes.io/projected/e89ef05b-4ac0-495f-886e-ab8d4c37195d-kube-api-access-nfnxd\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852328 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-config-data-custom\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " 
pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852354 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e89ef05b-4ac0-495f-886e-ab8d4c37195d-logs\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.852750 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e89ef05b-4ac0-495f-886e-ab8d4c37195d-logs\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.853259 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-dns-svc\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.853340 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.853774 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-config\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: 
I1004 05:06:59.857015 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.862251 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-config-data\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.863342 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-combined-ca-bundle\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.879319 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shv58\" (UniqueName: \"kubernetes.io/projected/33420dae-9327-4060-9047-d04b83d925be-kube-api-access-shv58\") pod \"dnsmasq-dns-699df9757c-8rvpx\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.881275 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e89ef05b-4ac0-495f-886e-ab8d4c37195d-config-data-custom\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc 
kubenswrapper[4802]: I1004 05:06:59.884201 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f55c6f5ff-whrkn" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.888971 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfnxd\" (UniqueName: \"kubernetes.io/projected/e89ef05b-4ac0-495f-886e-ab8d4c37195d-kube-api-access-nfnxd\") pod \"barbican-keystone-listener-88b556fb6-qrcjc\" (UID: \"e89ef05b-4ac0-495f-886e-ab8d4c37195d\") " pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.925650 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.953859 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data-custom\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.953907 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90403c7e-9344-4d4f-a0f1-80797f1cab83-logs\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.953995 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8vkp\" (UniqueName: \"kubernetes.io/projected/90403c7e-9344-4d4f-a0f1-80797f1cab83-kube-api-access-k8vkp\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " 
pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.954028 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-combined-ca-bundle\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.954070 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:06:59 crc kubenswrapper[4802]: I1004 05:06:59.972748 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.055835 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8vkp\" (UniqueName: \"kubernetes.io/projected/90403c7e-9344-4d4f-a0f1-80797f1cab83-kube-api-access-k8vkp\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.055889 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-combined-ca-bundle\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.055927 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.056012 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data-custom\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.056037 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90403c7e-9344-4d4f-a0f1-80797f1cab83-logs\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.056508 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90403c7e-9344-4d4f-a0f1-80797f1cab83-logs\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.064747 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-combined-ca-bundle\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.065688 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data\") pod 
\"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.066267 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data-custom\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.082443 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8vkp\" (UniqueName: \"kubernetes.io/projected/90403c7e-9344-4d4f-a0f1-80797f1cab83-kube-api-access-k8vkp\") pod \"barbican-api-8446845c4b-xzvk8\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") " pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.247154 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.328519 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f55c6f5ff-whrkn"] Oct 04 05:07:00 crc kubenswrapper[4802]: W1004 05:07:00.347687 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdbd3c17_f18e_4a9d_a65e_1d199459c0f3.slice/crio-ec65a4957c29dad20bfff31f7bbc1070948062dc0286b6c617e067cad13f65a5 WatchSource:0}: Error finding container ec65a4957c29dad20bfff31f7bbc1070948062dc0286b6c617e067cad13f65a5: Status 404 returned error can't find the container with id ec65a4957c29dad20bfff31f7bbc1070948062dc0286b6c617e067cad13f65a5 Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.481495 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-88b556fb6-qrcjc"] Oct 04 05:07:00 crc kubenswrapper[4802]: W1004 05:07:00.481815 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode89ef05b_4ac0_495f_886e_ab8d4c37195d.slice/crio-6324804734bea765975a22e46dce4eeb0decaad24d05fa2776ab0ebb9624ea83 WatchSource:0}: Error finding container 6324804734bea765975a22e46dce4eeb0decaad24d05fa2776ab0ebb9624ea83: Status 404 returned error can't find the container with id 6324804734bea765975a22e46dce4eeb0decaad24d05fa2776ab0ebb9624ea83 Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.569913 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-8rvpx"] Oct 04 05:07:00 crc kubenswrapper[4802]: I1004 05:07:00.616622 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8446845c4b-xzvk8"] Oct 04 05:07:00 crc kubenswrapper[4802]: W1004 05:07:00.629563 4802 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90403c7e_9344_4d4f_a0f1_80797f1cab83.slice/crio-7983588d55ac16d7b707017c29a03079d153ea81e2a1db8a259e887b4141e0d5 WatchSource:0}: Error finding container 7983588d55ac16d7b707017c29a03079d153ea81e2a1db8a259e887b4141e0d5: Status 404 returned error can't find the container with id 7983588d55ac16d7b707017c29a03079d153ea81e2a1db8a259e887b4141e0d5 Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.318436 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerStarted","Data":"9b52750d4d5be522242a54f35a91878568173e4ea0ea28bcccf1fc3b5c689179"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.328509 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" event={"ID":"e89ef05b-4ac0-495f-886e-ab8d4c37195d","Type":"ContainerStarted","Data":"6324804734bea765975a22e46dce4eeb0decaad24d05fa2776ab0ebb9624ea83"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.333767 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f55c6f5ff-whrkn" event={"ID":"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3","Type":"ContainerStarted","Data":"ec65a4957c29dad20bfff31f7bbc1070948062dc0286b6c617e067cad13f65a5"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.339223 4802 generic.go:334] "Generic (PLEG): container finished" podID="b27caca0-bc37-41bd-a5fb-3536cfc1dfa1" containerID="063cb26e28fd4cbafe4288215ae4b857411b65df26a62d8f00036b1713918e2e" exitCode=0 Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.339303 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pm9gl" event={"ID":"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1","Type":"ContainerDied","Data":"063cb26e28fd4cbafe4288215ae4b857411b65df26a62d8f00036b1713918e2e"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.341903 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8446845c4b-xzvk8" event={"ID":"90403c7e-9344-4d4f-a0f1-80797f1cab83","Type":"ContainerStarted","Data":"c82b8a8b985dee6ff468be90a4dce68561d04564c9609a3c4c12ebda4ba42f52"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.341944 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8446845c4b-xzvk8" event={"ID":"90403c7e-9344-4d4f-a0f1-80797f1cab83","Type":"ContainerStarted","Data":"afdaee336383e2eac9cda2584553208a237b59234d55ccab5f93a57b4d6c78fc"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.341955 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8446845c4b-xzvk8" event={"ID":"90403c7e-9344-4d4f-a0f1-80797f1cab83","Type":"ContainerStarted","Data":"7983588d55ac16d7b707017c29a03079d153ea81e2a1db8a259e887b4141e0d5"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.342483 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.342526 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.356729 4802 generic.go:334] "Generic (PLEG): container finished" podID="33420dae-9327-4060-9047-d04b83d925be" containerID="a1d36be04ca6e504bd705981c737a731da72a9a9d0188a9f2f4b0a46dfd4126d" exitCode=0 Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.356773 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" event={"ID":"33420dae-9327-4060-9047-d04b83d925be","Type":"ContainerDied","Data":"a1d36be04ca6e504bd705981c737a731da72a9a9d0188a9f2f4b0a46dfd4126d"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.356813 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" 
event={"ID":"33420dae-9327-4060-9047-d04b83d925be","Type":"ContainerStarted","Data":"32b95bc1cd68e1519694ad68ca147da72e9c99b45384ba37b196ffe671159aad"} Oct 04 05:07:01 crc kubenswrapper[4802]: I1004 05:07:01.375291 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8446845c4b-xzvk8" podStartSLOduration=2.3752737760000002 podStartE2EDuration="2.375273776s" podCreationTimestamp="2025-10-04 05:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:01.371385636 +0000 UTC m=+1263.779386271" watchObservedRunningTime="2025-10-04 05:07:01.375273776 +0000 UTC m=+1263.783274401" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.245154 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d4c9d8df8-pp97l"] Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.247277 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.249427 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.249547 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.276139 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d4c9d8df8-pp97l"] Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.371243 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" event={"ID":"33420dae-9327-4060-9047-d04b83d925be","Type":"ContainerStarted","Data":"cb03c590187a2ad6ebbab475b0e54c0a222e0e4201e11161ff9f1cc545f06c84"} Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.400549 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" podStartSLOduration=3.400530617 podStartE2EDuration="3.400530617s" podCreationTimestamp="2025-10-04 05:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:02.396347568 +0000 UTC m=+1264.804348203" watchObservedRunningTime="2025-10-04 05:07:02.400530617 +0000 UTC m=+1264.808531242" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.401630 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-combined-ca-bundle\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.401724 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-config-data-custom\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.401770 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-config-data\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.401816 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18615f98-63ad-48ee-83c3-1caeee1be993-logs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.402010 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-public-tls-certs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.402142 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-internal-tls-certs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 
05:07:02.402439 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqt7k\" (UniqueName: \"kubernetes.io/projected/18615f98-63ad-48ee-83c3-1caeee1be993-kube-api-access-wqt7k\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.503739 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqt7k\" (UniqueName: \"kubernetes.io/projected/18615f98-63ad-48ee-83c3-1caeee1be993-kube-api-access-wqt7k\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.504064 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-combined-ca-bundle\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.504107 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-config-data-custom\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.504162 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-config-data\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: 
I1004 05:07:02.504214 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18615f98-63ad-48ee-83c3-1caeee1be993-logs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.504250 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-public-tls-certs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.504288 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-internal-tls-certs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.507262 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18615f98-63ad-48ee-83c3-1caeee1be993-logs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.520437 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-public-tls-certs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.520598 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-combined-ca-bundle\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.521197 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-config-data\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.521989 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-config-data-custom\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.523398 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqt7k\" (UniqueName: \"kubernetes.io/projected/18615f98-63ad-48ee-83c3-1caeee1be993-kube-api-access-wqt7k\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.527866 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18615f98-63ad-48ee-83c3-1caeee1be993-internal-tls-certs\") pod \"barbican-api-5d4c9d8df8-pp97l\" (UID: \"18615f98-63ad-48ee-83c3-1caeee1be993\") " pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.545230 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64457d9b48-l5jfj" Oct 04 
05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.547658 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64457d9b48-l5jfj" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.620314 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.781516 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.913027 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-config\") pod \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.913380 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-combined-ca-bundle\") pod \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.913407 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrgwp\" (UniqueName: \"kubernetes.io/projected/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-kube-api-access-zrgwp\") pod \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\" (UID: \"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1\") " Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.932940 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-kube-api-access-zrgwp" (OuterVolumeSpecName: "kube-api-access-zrgwp") pod "b27caca0-bc37-41bd-a5fb-3536cfc1dfa1" (UID: "b27caca0-bc37-41bd-a5fb-3536cfc1dfa1"). 
InnerVolumeSpecName "kube-api-access-zrgwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.953819 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b27caca0-bc37-41bd-a5fb-3536cfc1dfa1" (UID: "b27caca0-bc37-41bd-a5fb-3536cfc1dfa1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:02 crc kubenswrapper[4802]: I1004 05:07:02.965515 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-config" (OuterVolumeSpecName: "config") pod "b27caca0-bc37-41bd-a5fb-3536cfc1dfa1" (UID: "b27caca0-bc37-41bd-a5fb-3536cfc1dfa1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.015304 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.015340 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.015353 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrgwp\" (UniqueName: \"kubernetes.io/projected/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1-kube-api-access-zrgwp\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.142598 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d4c9d8df8-pp97l"] Oct 04 05:07:03 crc kubenswrapper[4802]: W1004 05:07:03.145585 4802 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18615f98_63ad_48ee_83c3_1caeee1be993.slice/crio-82146939dfeab7468c2b4ee3fd1ee6cabe5c1086046680b4c3b4ed38446bc3fa WatchSource:0}: Error finding container 82146939dfeab7468c2b4ee3fd1ee6cabe5c1086046680b4c3b4ed38446bc3fa: Status 404 returned error can't find the container with id 82146939dfeab7468c2b4ee3fd1ee6cabe5c1086046680b4c3b4ed38446bc3fa Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.379557 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" event={"ID":"e89ef05b-4ac0-495f-886e-ab8d4c37195d","Type":"ContainerStarted","Data":"6c50bccaa259f5be8e6354ab0b01d444fd202f4f1c61646ea63358edc064c239"} Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.379608 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" event={"ID":"e89ef05b-4ac0-495f-886e-ab8d4c37195d","Type":"ContainerStarted","Data":"f56817f07fd27be4b267fc77a5f9000713af0d87006ac3008d88bf3bad78a657"} Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.385848 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerStarted","Data":"5e6f675210f2c7e5c66e6b93525b15b615bc9b84c7e973d543604550e82acc0a"} Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.388067 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f55c6f5ff-whrkn" event={"ID":"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3","Type":"ContainerStarted","Data":"a285415d97456bb110cdf7ee901a42693f46146890af32bdce82ca377080af48"} Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.388109 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f55c6f5ff-whrkn" 
event={"ID":"bdbd3c17-f18e-4a9d-a65e-1d199459c0f3","Type":"ContainerStarted","Data":"b2a440b252b396d4f19066f6679dafc5b09bf2856ab127d57a4c813176c12c03"} Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.394390 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pm9gl" event={"ID":"b27caca0-bc37-41bd-a5fb-3536cfc1dfa1","Type":"ContainerDied","Data":"2400ca8aa763e82a9b98d4bff7b1e404df716d7d7ca92923f4516cc454c67aaf"} Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.394426 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2400ca8aa763e82a9b98d4bff7b1e404df716d7d7ca92923f4516cc454c67aaf" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.394490 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pm9gl" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.404315 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-88b556fb6-qrcjc" podStartSLOduration=2.5212485620000002 podStartE2EDuration="4.404296098s" podCreationTimestamp="2025-10-04 05:06:59 +0000 UTC" firstStartedPulling="2025-10-04 05:07:00.487129999 +0000 UTC m=+1262.895130624" lastFinishedPulling="2025-10-04 05:07:02.370177535 +0000 UTC m=+1264.778178160" observedRunningTime="2025-10-04 05:07:03.395734185 +0000 UTC m=+1265.803734810" watchObservedRunningTime="2025-10-04 05:07:03.404296098 +0000 UTC m=+1265.812296723" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.421735 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4c9d8df8-pp97l" event={"ID":"18615f98-63ad-48ee-83c3-1caeee1be993","Type":"ContainerStarted","Data":"7182330dffabd33810a91788844d657413fa86b5f024a1e0bcd445d26f2d2956"} Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.421775 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4c9d8df8-pp97l" 
event={"ID":"18615f98-63ad-48ee-83c3-1caeee1be993","Type":"ContainerStarted","Data":"82146939dfeab7468c2b4ee3fd1ee6cabe5c1086046680b4c3b4ed38446bc3fa"} Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.423169 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.425456 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f55c6f5ff-whrkn" podStartSLOduration=2.411896806 podStartE2EDuration="4.425443769s" podCreationTimestamp="2025-10-04 05:06:59 +0000 UTC" firstStartedPulling="2025-10-04 05:07:00.358551757 +0000 UTC m=+1262.766552382" lastFinishedPulling="2025-10-04 05:07:02.37209872 +0000 UTC m=+1264.780099345" observedRunningTime="2025-10-04 05:07:03.424474772 +0000 UTC m=+1265.832475387" watchObservedRunningTime="2025-10-04 05:07:03.425443769 +0000 UTC m=+1265.833444404" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.621503 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-8rvpx"] Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.673736 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-w7w7n"] Oct 04 05:07:03 crc kubenswrapper[4802]: E1004 05:07:03.674412 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27caca0-bc37-41bd-a5fb-3536cfc1dfa1" containerName="neutron-db-sync" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.674431 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27caca0-bc37-41bd-a5fb-3536cfc1dfa1" containerName="neutron-db-sync" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.674598 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27caca0-bc37-41bd-a5fb-3536cfc1dfa1" containerName="neutron-db-sync" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.675564 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.699991 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-w7w7n"] Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.758382 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8b55b6676-mqh5g"] Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.759727 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.764299 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.764538 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.764802 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.765355 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vm2kp" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.776448 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8b55b6676-mqh5g"] Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.828794 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-httpd-config\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.828856 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-combined-ca-bundle\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.828901 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-ovndb-tls-certs\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.828924 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-config\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.828975 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.829014 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-config\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.829051 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj7gz\" (UniqueName: 
\"kubernetes.io/projected/72327efa-0833-4cf0-bc74-72186ffab61d-kube-api-access-dj7gz\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.829088 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.829114 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwtw\" (UniqueName: \"kubernetes.io/projected/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-kube-api-access-bmwtw\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.829141 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-dns-svc\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.930690 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-httpd-config\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.930754 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-combined-ca-bundle\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.930817 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-ovndb-tls-certs\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.930841 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-config\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.930893 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.930934 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-config\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.930985 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj7gz\" (UniqueName: \"kubernetes.io/projected/72327efa-0833-4cf0-bc74-72186ffab61d-kube-api-access-dj7gz\") pod 
\"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.931033 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.931066 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmwtw\" (UniqueName: \"kubernetes.io/projected/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-kube-api-access-bmwtw\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.931105 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-dns-svc\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.932405 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-dns-svc\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.932413 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " 
pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.932703 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.933277 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-config\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.942677 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-ovndb-tls-certs\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.943096 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-combined-ca-bundle\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.949337 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-httpd-config\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.952154 4802 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-config\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.965403 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmwtw\" (UniqueName: \"kubernetes.io/projected/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-kube-api-access-bmwtw\") pod \"dnsmasq-dns-6bb684768f-w7w7n\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") " pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:03 crc kubenswrapper[4802]: I1004 05:07:03.971439 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj7gz\" (UniqueName: \"kubernetes.io/projected/72327efa-0833-4cf0-bc74-72186ffab61d-kube-api-access-dj7gz\") pod \"neutron-8b55b6676-mqh5g\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:04 crc kubenswrapper[4802]: I1004 05:07:04.013869 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:04 crc kubenswrapper[4802]: I1004 05:07:04.088131 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:04 crc kubenswrapper[4802]: I1004 05:07:04.389433 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-w7w7n"] Oct 04 05:07:04 crc kubenswrapper[4802]: I1004 05:07:04.458490 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4c9d8df8-pp97l" event={"ID":"18615f98-63ad-48ee-83c3-1caeee1be993","Type":"ContainerStarted","Data":"e8525c2d0d254d1703f67eb3657c24e45b5c97259fda6a6c3e5fcc1ae64c809e"} Oct 04 05:07:04 crc kubenswrapper[4802]: I1004 05:07:04.458865 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:04 crc kubenswrapper[4802]: I1004 05:07:04.458915 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:04 crc kubenswrapper[4802]: I1004 05:07:04.467179 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" event={"ID":"c3feeb57-c944-48d9-ac9b-d66991cb5bf4","Type":"ContainerStarted","Data":"6d62f092b66fac52edc7722e1cb3a808bec8878faab4be1ebd40ad7357cd3e38"} Oct 04 05:07:04 crc kubenswrapper[4802]: I1004 05:07:04.493160 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d4c9d8df8-pp97l" podStartSLOduration=2.493141567 podStartE2EDuration="2.493141567s" podCreationTimestamp="2025-10-04 05:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:04.480034004 +0000 UTC m=+1266.888034639" watchObservedRunningTime="2025-10-04 05:07:04.493141567 +0000 UTC m=+1266.901142192" Oct 04 05:07:05 crc kubenswrapper[4802]: I1004 05:07:05.471457 4802 generic.go:334] "Generic (PLEG): container finished" podID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" 
containerID="6c1b3e18636973c8ab11413d89236799a8d6413679a182e7eebfde1917a083bb" exitCode=0 Oct 04 05:07:05 crc kubenswrapper[4802]: I1004 05:07:05.471502 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" event={"ID":"c3feeb57-c944-48d9-ac9b-d66991cb5bf4","Type":"ContainerDied","Data":"6c1b3e18636973c8ab11413d89236799a8d6413679a182e7eebfde1917a083bb"} Oct 04 05:07:05 crc kubenswrapper[4802]: I1004 05:07:05.473938 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" podUID="33420dae-9327-4060-9047-d04b83d925be" containerName="dnsmasq-dns" containerID="cri-o://cb03c590187a2ad6ebbab475b0e54c0a222e0e4201e11161ff9f1cc545f06c84" gracePeriod=10 Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.351339 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77c7dfb8d9-7pqjl"] Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.353050 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.358848 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.359387 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.406176 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77c7dfb8d9-7pqjl"] Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.485332 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-config\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.485376 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-httpd-config\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.485586 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-combined-ca-bundle\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.485606 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jms4\" (UniqueName: 
\"kubernetes.io/projected/220afca5-a3fb-496f-94b8-9f0123f0393f-kube-api-access-5jms4\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.485992 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-public-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.486017 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-ovndb-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.486052 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-internal-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.502608 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8b55b6676-mqh5g"] Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.510862 4802 generic.go:334] "Generic (PLEG): container finished" podID="33420dae-9327-4060-9047-d04b83d925be" containerID="cb03c590187a2ad6ebbab475b0e54c0a222e0e4201e11161ff9f1cc545f06c84" exitCode=0 Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.510913 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" 
event={"ID":"33420dae-9327-4060-9047-d04b83d925be","Type":"ContainerDied","Data":"cb03c590187a2ad6ebbab475b0e54c0a222e0e4201e11161ff9f1cc545f06c84"} Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.590555 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-httpd-config\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.590699 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-combined-ca-bundle\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.590720 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jms4\" (UniqueName: \"kubernetes.io/projected/220afca5-a3fb-496f-94b8-9f0123f0393f-kube-api-access-5jms4\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.590750 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-public-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.590767 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-ovndb-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: 
\"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.590794 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-internal-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.590821 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-config\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.605172 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-internal-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.607881 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-ovndb-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.610430 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-config\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.615335 
4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-httpd-config\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.615540 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-public-tls-certs\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.621934 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220afca5-a3fb-496f-94b8-9f0123f0393f-combined-ca-bundle\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.631452 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jms4\" (UniqueName: \"kubernetes.io/projected/220afca5-a3fb-496f-94b8-9f0123f0393f-kube-api-access-5jms4\") pod \"neutron-77c7dfb8d9-7pqjl\" (UID: \"220afca5-a3fb-496f-94b8-9f0123f0393f\") " pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.661050 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:06 crc kubenswrapper[4802]: I1004 05:07:06.909299 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.003556 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-config\") pod \"33420dae-9327-4060-9047-d04b83d925be\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.003983 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-sb\") pod \"33420dae-9327-4060-9047-d04b83d925be\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.004092 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-dns-svc\") pod \"33420dae-9327-4060-9047-d04b83d925be\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.007496 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shv58\" (UniqueName: \"kubernetes.io/projected/33420dae-9327-4060-9047-d04b83d925be-kube-api-access-shv58\") pod \"33420dae-9327-4060-9047-d04b83d925be\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.007566 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-nb\") pod \"33420dae-9327-4060-9047-d04b83d925be\" (UID: \"33420dae-9327-4060-9047-d04b83d925be\") " Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.020946 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/33420dae-9327-4060-9047-d04b83d925be-kube-api-access-shv58" (OuterVolumeSpecName: "kube-api-access-shv58") pod "33420dae-9327-4060-9047-d04b83d925be" (UID: "33420dae-9327-4060-9047-d04b83d925be"). InnerVolumeSpecName "kube-api-access-shv58". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.090959 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33420dae-9327-4060-9047-d04b83d925be" (UID: "33420dae-9327-4060-9047-d04b83d925be"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.110967 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.111007 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shv58\" (UniqueName: \"kubernetes.io/projected/33420dae-9327-4060-9047-d04b83d925be-kube-api-access-shv58\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.117276 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33420dae-9327-4060-9047-d04b83d925be" (UID: "33420dae-9327-4060-9047-d04b83d925be"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.149267 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33420dae-9327-4060-9047-d04b83d925be" (UID: "33420dae-9327-4060-9047-d04b83d925be"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.160038 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-config" (OuterVolumeSpecName: "config") pod "33420dae-9327-4060-9047-d04b83d925be" (UID: "33420dae-9327-4060-9047-d04b83d925be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.213715 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.213758 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.213771 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33420dae-9327-4060-9047-d04b83d925be-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.480902 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77c7dfb8d9-7pqjl"] Oct 04 05:07:07 crc kubenswrapper[4802]: W1004 05:07:07.491604 4802 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod220afca5_a3fb_496f_94b8_9f0123f0393f.slice/crio-45f969258bdc3d2764b6060eecaeec0c7aa10c8e6ec85ca76fbebc4bb82af382 WatchSource:0}: Error finding container 45f969258bdc3d2764b6060eecaeec0c7aa10c8e6ec85ca76fbebc4bb82af382: Status 404 returned error can't find the container with id 45f969258bdc3d2764b6060eecaeec0c7aa10c8e6ec85ca76fbebc4bb82af382 Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.507083 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.538864 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerStarted","Data":"defb3fb5d3c680049f0af81ed7d55c8770ceee4531d8f3045846c25075e12384"} Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.540026 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.547100 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b55b6676-mqh5g" event={"ID":"72327efa-0833-4cf0-bc74-72186ffab61d","Type":"ContainerStarted","Data":"6e90d1518d93c4004b110815414000891842f834b62f02e3e5dc9ed713252cc7"} Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.547155 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b55b6676-mqh5g" event={"ID":"72327efa-0833-4cf0-bc74-72186ffab61d","Type":"ContainerStarted","Data":"aad0ce4cedcad2607951140a87139e9b6eac1b050f7651e53cc40b494c9496fd"} Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.547167 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b55b6676-mqh5g" event={"ID":"72327efa-0833-4cf0-bc74-72186ffab61d","Type":"ContainerStarted","Data":"8d9c734f6f8023e1fc28b3a9f25de5476e5c569893c6fdc03f2df4d408a9fe59"} Oct 04 
05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.547505 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.549707 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" event={"ID":"33420dae-9327-4060-9047-d04b83d925be","Type":"ContainerDied","Data":"32b95bc1cd68e1519694ad68ca147da72e9c99b45384ba37b196ffe671159aad"} Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.549756 4802 scope.go:117] "RemoveContainer" containerID="cb03c590187a2ad6ebbab475b0e54c0a222e0e4201e11161ff9f1cc545f06c84" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.549916 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-8rvpx" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.565035 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" event={"ID":"c3feeb57-c944-48d9-ac9b-d66991cb5bf4","Type":"ContainerStarted","Data":"a9fe4b9b283d7b6bf3654e51d58d8b3e773664e27da95bc06af772ffd5a95f48"} Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.566026 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.571141 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c7dfb8d9-7pqjl" event={"ID":"220afca5-a3fb-496f-94b8-9f0123f0393f","Type":"ContainerStarted","Data":"45f969258bdc3d2764b6060eecaeec0c7aa10c8e6ec85ca76fbebc4bb82af382"} Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.592104 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.251736217 podStartE2EDuration="10.59208156s" podCreationTimestamp="2025-10-04 05:06:57 +0000 UTC" firstStartedPulling="2025-10-04 05:06:58.14676303 +0000 UTC 
m=+1260.554763645" lastFinishedPulling="2025-10-04 05:07:06.487108363 +0000 UTC m=+1268.895108988" observedRunningTime="2025-10-04 05:07:07.562301684 +0000 UTC m=+1269.970302319" watchObservedRunningTime="2025-10-04 05:07:07.59208156 +0000 UTC m=+1270.000082185" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.603349 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8b55b6676-mqh5g" podStartSLOduration=4.603328489 podStartE2EDuration="4.603328489s" podCreationTimestamp="2025-10-04 05:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:07.584067152 +0000 UTC m=+1269.992067777" watchObservedRunningTime="2025-10-04 05:07:07.603328489 +0000 UTC m=+1270.011329114" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.609891 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" podStartSLOduration=4.609873795 podStartE2EDuration="4.609873795s" podCreationTimestamp="2025-10-04 05:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:07.600699514 +0000 UTC m=+1270.008700159" watchObservedRunningTime="2025-10-04 05:07:07.609873795 +0000 UTC m=+1270.017874420" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.713900 4802 scope.go:117] "RemoveContainer" containerID="a1d36be04ca6e504bd705981c737a731da72a9a9d0188a9f2f4b0a46dfd4126d" Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.747321 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-8rvpx"] Oct 04 05:07:07 crc kubenswrapper[4802]: I1004 05:07:07.753618 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-8rvpx"] Oct 04 05:07:08 crc kubenswrapper[4802]: I1004 05:07:08.226207 4802 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/keystone-f8c67fc4f-cnjpc" Oct 04 05:07:08 crc kubenswrapper[4802]: I1004 05:07:08.379276 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33420dae-9327-4060-9047-d04b83d925be" path="/var/lib/kubelet/pods/33420dae-9327-4060-9047-d04b83d925be/volumes" Oct 04 05:07:08 crc kubenswrapper[4802]: I1004 05:07:08.580838 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mftc" event={"ID":"f35c8191-842a-419f-8b4a-6f36bd01f6cd","Type":"ContainerStarted","Data":"e4d187f1cbcaff8b871720ed05eda26a0288b917ddc1af61ef0eb52322bc1c1f"} Oct 04 05:07:08 crc kubenswrapper[4802]: I1004 05:07:08.584143 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c7dfb8d9-7pqjl" event={"ID":"220afca5-a3fb-496f-94b8-9f0123f0393f","Type":"ContainerStarted","Data":"9fd56c68fcd3e24e6c7adc165910756b1602fd3a991e910fbf5fb6b53f1d9fa7"} Oct 04 05:07:08 crc kubenswrapper[4802]: I1004 05:07:08.584193 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c7dfb8d9-7pqjl" event={"ID":"220afca5-a3fb-496f-94b8-9f0123f0393f","Type":"ContainerStarted","Data":"7599963e154ac40287e10e64b14824838c71136956e0b82e3493dfe72d94f7d3"} Oct 04 05:07:08 crc kubenswrapper[4802]: I1004 05:07:08.599340 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4mftc" podStartSLOduration=5.240634156 podStartE2EDuration="36.59932141s" podCreationTimestamp="2025-10-04 05:06:32 +0000 UTC" firstStartedPulling="2025-10-04 05:06:35.152297227 +0000 UTC m=+1237.560297852" lastFinishedPulling="2025-10-04 05:07:06.510984481 +0000 UTC m=+1268.918985106" observedRunningTime="2025-10-04 05:07:08.596037927 +0000 UTC m=+1271.004038552" watchObservedRunningTime="2025-10-04 05:07:08.59932141 +0000 UTC m=+1271.007322035" Oct 04 05:07:08 crc kubenswrapper[4802]: I1004 05:07:08.623719 4802 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/neutron-77c7dfb8d9-7pqjl" podStartSLOduration=2.6237037819999998 podStartE2EDuration="2.623703782s" podCreationTimestamp="2025-10-04 05:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:08.61692707 +0000 UTC m=+1271.024927695" watchObservedRunningTime="2025-10-04 05:07:08.623703782 +0000 UTC m=+1271.031704397" Oct 04 05:07:09 crc kubenswrapper[4802]: I1004 05:07:09.402867 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8446845c4b-xzvk8" Oct 04 05:07:09 crc kubenswrapper[4802]: I1004 05:07:09.595375 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.941486 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 04 05:07:11 crc kubenswrapper[4802]: E1004 05:07:11.942540 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33420dae-9327-4060-9047-d04b83d925be" containerName="init" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.942568 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="33420dae-9327-4060-9047-d04b83d925be" containerName="init" Oct 04 05:07:11 crc kubenswrapper[4802]: E1004 05:07:11.942631 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33420dae-9327-4060-9047-d04b83d925be" containerName="dnsmasq-dns" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.942637 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="33420dae-9327-4060-9047-d04b83d925be" containerName="dnsmasq-dns" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.942808 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="33420dae-9327-4060-9047-d04b83d925be" containerName="dnsmasq-dns" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.943428 4802 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.945582 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.945738 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zwjd8" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.950127 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 04 05:07:11 crc kubenswrapper[4802]: I1004 05:07:11.957132 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.096338 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f26f2ddf-5813-4d3b-aa54-302bba18586f-openstack-config\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.096447 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26f2ddf-5813-4d3b-aa54-302bba18586f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.097057 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f26f2ddf-5813-4d3b-aa54-302bba18586f-openstack-config-secret\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 
05:07:12.097167 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5tj\" (UniqueName: \"kubernetes.io/projected/f26f2ddf-5813-4d3b-aa54-302bba18586f-kube-api-access-qd5tj\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.198775 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26f2ddf-5813-4d3b-aa54-302bba18586f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.198860 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f26f2ddf-5813-4d3b-aa54-302bba18586f-openstack-config-secret\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.198908 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5tj\" (UniqueName: \"kubernetes.io/projected/f26f2ddf-5813-4d3b-aa54-302bba18586f-kube-api-access-qd5tj\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.198983 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f26f2ddf-5813-4d3b-aa54-302bba18586f-openstack-config\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.199921 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/f26f2ddf-5813-4d3b-aa54-302bba18586f-openstack-config\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.204918 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f26f2ddf-5813-4d3b-aa54-302bba18586f-openstack-config-secret\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.218771 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26f2ddf-5813-4d3b-aa54-302bba18586f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.226116 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5tj\" (UniqueName: \"kubernetes.io/projected/f26f2ddf-5813-4d3b-aa54-302bba18586f-kube-api-access-qd5tj\") pod \"openstackclient\" (UID: \"f26f2ddf-5813-4d3b-aa54-302bba18586f\") " pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.263036 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 04 05:07:12 crc kubenswrapper[4802]: I1004 05:07:12.689049 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 04 05:07:13 crc kubenswrapper[4802]: I1004 05:07:13.627128 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f26f2ddf-5813-4d3b-aa54-302bba18586f","Type":"ContainerStarted","Data":"8a29b804f7ebb71d5c04439a7dd4f8ed86e7a7baa1038234b07942572c647112"} Oct 04 05:07:13 crc kubenswrapper[4802]: I1004 05:07:13.630001 4802 generic.go:334] "Generic (PLEG): container finished" podID="f35c8191-842a-419f-8b4a-6f36bd01f6cd" containerID="e4d187f1cbcaff8b871720ed05eda26a0288b917ddc1af61ef0eb52322bc1c1f" exitCode=0 Oct 04 05:07:13 crc kubenswrapper[4802]: I1004 05:07:13.630041 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mftc" event={"ID":"f35c8191-842a-419f-8b4a-6f36bd01f6cd","Type":"ContainerDied","Data":"e4d187f1cbcaff8b871720ed05eda26a0288b917ddc1af61ef0eb52322bc1c1f"} Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.015764 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.088544 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-n75ls"] Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.088939 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" podUID="85b6b87c-0d6a-4bd3-af19-94f09748a665" containerName="dnsmasq-dns" containerID="cri-o://ae46130066190aeed06bc56a99e237b3e545d1317fa61def3da43e2ac3afcdc4" gracePeriod=10 Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.106041 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:14 crc kubenswrapper[4802]: 
I1004 05:07:14.310776 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d4c9d8df8-pp97l" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.374724 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8446845c4b-xzvk8"] Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.374947 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8446845c4b-xzvk8" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api-log" containerID="cri-o://afdaee336383e2eac9cda2584553208a237b59234d55ccab5f93a57b4d6c78fc" gracePeriod=30 Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.375300 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8446845c4b-xzvk8" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api" containerID="cri-o://c82b8a8b985dee6ff468be90a4dce68561d04564c9609a3c4c12ebda4ba42f52" gracePeriod=30 Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.653968 4802 generic.go:334] "Generic (PLEG): container finished" podID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerID="afdaee336383e2eac9cda2584553208a237b59234d55ccab5f93a57b4d6c78fc" exitCode=143 Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.654138 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8446845c4b-xzvk8" event={"ID":"90403c7e-9344-4d4f-a0f1-80797f1cab83","Type":"ContainerDied","Data":"afdaee336383e2eac9cda2584553208a237b59234d55ccab5f93a57b4d6c78fc"} Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.660171 4802 generic.go:334] "Generic (PLEG): container finished" podID="85b6b87c-0d6a-4bd3-af19-94f09748a665" containerID="ae46130066190aeed06bc56a99e237b3e545d1317fa61def3da43e2ac3afcdc4" exitCode=0 Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.661483 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" event={"ID":"85b6b87c-0d6a-4bd3-af19-94f09748a665","Type":"ContainerDied","Data":"ae46130066190aeed06bc56a99e237b3e545d1317fa61def3da43e2ac3afcdc4"} Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.725288 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.748449 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-dns-svc\") pod \"85b6b87c-0d6a-4bd3-af19-94f09748a665\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.748589 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-config\") pod \"85b6b87c-0d6a-4bd3-af19-94f09748a665\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.748618 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-nb\") pod \"85b6b87c-0d6a-4bd3-af19-94f09748a665\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.748678 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrx9w\" (UniqueName: \"kubernetes.io/projected/85b6b87c-0d6a-4bd3-af19-94f09748a665-kube-api-access-lrx9w\") pod \"85b6b87c-0d6a-4bd3-af19-94f09748a665\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.748719 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-sb\") pod \"85b6b87c-0d6a-4bd3-af19-94f09748a665\" (UID: \"85b6b87c-0d6a-4bd3-af19-94f09748a665\") " Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.776681 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b6b87c-0d6a-4bd3-af19-94f09748a665-kube-api-access-lrx9w" (OuterVolumeSpecName: "kube-api-access-lrx9w") pod "85b6b87c-0d6a-4bd3-af19-94f09748a665" (UID: "85b6b87c-0d6a-4bd3-af19-94f09748a665"). InnerVolumeSpecName "kube-api-access-lrx9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.852592 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrx9w\" (UniqueName: \"kubernetes.io/projected/85b6b87c-0d6a-4bd3-af19-94f09748a665-kube-api-access-lrx9w\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.864405 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "85b6b87c-0d6a-4bd3-af19-94f09748a665" (UID: "85b6b87c-0d6a-4bd3-af19-94f09748a665"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.865162 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "85b6b87c-0d6a-4bd3-af19-94f09748a665" (UID: "85b6b87c-0d6a-4bd3-af19-94f09748a665"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.878197 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-config" (OuterVolumeSpecName: "config") pod "85b6b87c-0d6a-4bd3-af19-94f09748a665" (UID: "85b6b87c-0d6a-4bd3-af19-94f09748a665"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.883457 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85b6b87c-0d6a-4bd3-af19-94f09748a665" (UID: "85b6b87c-0d6a-4bd3-af19-94f09748a665"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.953748 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.953786 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.953803 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:14 crc kubenswrapper[4802]: I1004 05:07:14.953815 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85b6b87c-0d6a-4bd3-af19-94f09748a665-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.078531 
4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4mftc" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.159082 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-db-sync-config-data\") pod \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.159150 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35c8191-842a-419f-8b4a-6f36bd01f6cd-etc-machine-id\") pod \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.159174 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-scripts\") pod \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.159219 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-combined-ca-bundle\") pod \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.159286 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f35c8191-842a-419f-8b4a-6f36bd01f6cd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f35c8191-842a-419f-8b4a-6f36bd01f6cd" (UID: "f35c8191-842a-419f-8b4a-6f36bd01f6cd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.159597 4802 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f35c8191-842a-419f-8b4a-6f36bd01f6cd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.164822 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-scripts" (OuterVolumeSpecName: "scripts") pod "f35c8191-842a-419f-8b4a-6f36bd01f6cd" (UID: "f35c8191-842a-419f-8b4a-6f36bd01f6cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.165221 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f35c8191-842a-419f-8b4a-6f36bd01f6cd" (UID: "f35c8191-842a-419f-8b4a-6f36bd01f6cd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.187365 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35c8191-842a-419f-8b4a-6f36bd01f6cd" (UID: "f35c8191-842a-419f-8b4a-6f36bd01f6cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.260535 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-config-data\") pod \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.260612 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghdwd\" (UniqueName: \"kubernetes.io/projected/f35c8191-842a-419f-8b4a-6f36bd01f6cd-kube-api-access-ghdwd\") pod \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\" (UID: \"f35c8191-842a-419f-8b4a-6f36bd01f6cd\") " Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.261095 4802 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.261110 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.261120 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.266243 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35c8191-842a-419f-8b4a-6f36bd01f6cd-kube-api-access-ghdwd" (OuterVolumeSpecName: "kube-api-access-ghdwd") pod "f35c8191-842a-419f-8b4a-6f36bd01f6cd" (UID: "f35c8191-842a-419f-8b4a-6f36bd01f6cd"). InnerVolumeSpecName "kube-api-access-ghdwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.321683 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-config-data" (OuterVolumeSpecName: "config-data") pod "f35c8191-842a-419f-8b4a-6f36bd01f6cd" (UID: "f35c8191-842a-419f-8b4a-6f36bd01f6cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.363017 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35c8191-842a-419f-8b4a-6f36bd01f6cd-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.363053 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghdwd\" (UniqueName: \"kubernetes.io/projected/f35c8191-842a-419f-8b4a-6f36bd01f6cd-kube-api-access-ghdwd\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.683248 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4mftc" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.684997 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mftc" event={"ID":"f35c8191-842a-419f-8b4a-6f36bd01f6cd","Type":"ContainerDied","Data":"7e0075814ee060f7c4444fccf75d70a83559fe169e7f31c020893dccd57076ed"} Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.685051 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e0075814ee060f7c4444fccf75d70a83559fe169e7f31c020893dccd57076ed" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.687057 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" event={"ID":"85b6b87c-0d6a-4bd3-af19-94f09748a665","Type":"ContainerDied","Data":"66b7e17ad91e5199f04d5db18b8aeca2a8c32092a717de23185f14876571f740"} Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.687094 4802 scope.go:117] "RemoveContainer" containerID="ae46130066190aeed06bc56a99e237b3e545d1317fa61def3da43e2ac3afcdc4" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.687111 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-n75ls" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.711306 4802 scope.go:117] "RemoveContainer" containerID="ad161aae239278648b0613e04bdaa25adc2804fdb4d27d663abc795a8dbc44e6" Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.733536 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-n75ls"] Oct 04 05:07:15 crc kubenswrapper[4802]: I1004 05:07:15.743796 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-n75ls"] Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.036909 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 04 05:07:16 crc kubenswrapper[4802]: E1004 05:07:16.037272 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b6b87c-0d6a-4bd3-af19-94f09748a665" containerName="init" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.037292 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b6b87c-0d6a-4bd3-af19-94f09748a665" containerName="init" Oct 04 05:07:16 crc kubenswrapper[4802]: E1004 05:07:16.037321 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35c8191-842a-419f-8b4a-6f36bd01f6cd" containerName="cinder-db-sync" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.037327 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35c8191-842a-419f-8b4a-6f36bd01f6cd" containerName="cinder-db-sync" Oct 04 05:07:16 crc kubenswrapper[4802]: E1004 05:07:16.037342 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b6b87c-0d6a-4bd3-af19-94f09748a665" containerName="dnsmasq-dns" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.037349 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b6b87c-0d6a-4bd3-af19-94f09748a665" containerName="dnsmasq-dns" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.037534 4802 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="85b6b87c-0d6a-4bd3-af19-94f09748a665" containerName="dnsmasq-dns" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.037558 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35c8191-842a-419f-8b4a-6f36bd01f6cd" containerName="cinder-db-sync" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.053028 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.053133 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.057772 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.057941 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w54h4" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.059015 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.059192 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.078130 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.078209 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.078241 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.078277 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct6v4\" (UniqueName: \"kubernetes.io/projected/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-kube-api-access-ct6v4\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.078370 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.078401 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.132348 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"] Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.133797 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.152944 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"]
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.181608 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.181667 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.181691 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct6v4\" (UniqueName: \"kubernetes.io/projected/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-kube-api-access-ct6v4\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.181745 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.181771 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.181835 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.182002 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.193592 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.196612 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.197997 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.210270 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.237279 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct6v4\" (UniqueName: \"kubernetes.io/projected/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-kube-api-access-ct6v4\") pod \"cinder-scheduler-0\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.283116 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcgtn\" (UniqueName: \"kubernetes.io/projected/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-kube-api-access-jcgtn\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.283242 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.283280 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.283301 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-config\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.283385 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.390898 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.391055 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcgtn\" (UniqueName: \"kubernetes.io/projected/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-kube-api-access-jcgtn\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.391300 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.391341 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.391511 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-config\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.396337 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.396701 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.398528 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.399958 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-config\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.402118 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b6b87c-0d6a-4bd3-af19-94f09748a665" path="/var/lib/kubelet/pods/85b6b87c-0d6a-4bd3-af19-94f09748a665/volumes"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.418261 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.448222 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.476449 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcgtn\" (UniqueName: \"kubernetes.io/projected/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-kube-api-access-jcgtn\") pod \"dnsmasq-dns-6d97fcdd8f-h9t6h\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.484487 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.484813 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.500899 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.588925 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.597976 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48rnb\" (UniqueName: \"kubernetes.io/projected/541768b2-0585-41dd-9321-cc5b28a3053b-kube-api-access-48rnb\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.598054 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-scripts\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.598099 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/541768b2-0585-41dd-9321-cc5b28a3053b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.598151 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.598264 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541768b2-0585-41dd-9321-cc5b28a3053b-logs\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.598380 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data-custom\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.598493 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.701016 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-scripts\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.701470 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/541768b2-0585-41dd-9321-cc5b28a3053b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.701562 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.701682 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541768b2-0585-41dd-9321-cc5b28a3053b-logs\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.701765 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data-custom\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.701802 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.701920 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48rnb\" (UniqueName: \"kubernetes.io/projected/541768b2-0585-41dd-9321-cc5b28a3053b-kube-api-access-48rnb\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.703218 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541768b2-0585-41dd-9321-cc5b28a3053b-logs\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.703279 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/541768b2-0585-41dd-9321-cc5b28a3053b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.711343 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.715912 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.716232 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data-custom\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.716634 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-scripts\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.734266 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48rnb\" (UniqueName: \"kubernetes.io/projected/541768b2-0585-41dd-9321-cc5b28a3053b-kube-api-access-48rnb\") pod \"cinder-api-0\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " pod="openstack/cinder-api-0"
Oct 04 05:07:16 crc kubenswrapper[4802]: I1004 05:07:16.879205 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.046938 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 04 05:07:17 crc kubenswrapper[4802]: W1004 05:07:17.058188 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe18510a_9ff5_431e_8f89_be3ede5a5bd4.slice/crio-f120e24c84a937ef9079ccb3322828cf2a0fbf9f35ee6d844aae6acda541620c WatchSource:0}: Error finding container f120e24c84a937ef9079ccb3322828cf2a0fbf9f35ee6d844aae6acda541620c: Status 404 returned error can't find the container with id f120e24c84a937ef9079ccb3322828cf2a0fbf9f35ee6d844aae6acda541620c
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.167199 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"]
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.392572 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.587394 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8446845c4b-xzvk8" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": read tcp 10.217.0.2:38986->10.217.0.145:9311: read: connection reset by peer"
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.587750 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-8446845c4b-xzvk8" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": read tcp 10.217.0.2:38996->10.217.0.145:9311: read: connection reset by peer"
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.743517 4802 generic.go:334] "Generic (PLEG): container finished" podID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" containerID="2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046" exitCode=0
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.743880 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" event={"ID":"fd505bc4-f9fa-4e50-a094-8f46c2d592a0","Type":"ContainerDied","Data":"2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046"}
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.743907 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" event={"ID":"fd505bc4-f9fa-4e50-a094-8f46c2d592a0","Type":"ContainerStarted","Data":"73cbacb952f6c6a846a7aabb88d7320c54b298766f6b68c3f22a827474ed7330"}
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.748932 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe18510a-9ff5-431e-8f89-be3ede5a5bd4","Type":"ContainerStarted","Data":"f120e24c84a937ef9079ccb3322828cf2a0fbf9f35ee6d844aae6acda541620c"}
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.756274 4802 generic.go:334] "Generic (PLEG): container finished" podID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerID="c82b8a8b985dee6ff468be90a4dce68561d04564c9609a3c4c12ebda4ba42f52" exitCode=0
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.756360 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8446845c4b-xzvk8" event={"ID":"90403c7e-9344-4d4f-a0f1-80797f1cab83","Type":"ContainerDied","Data":"c82b8a8b985dee6ff468be90a4dce68561d04564c9609a3c4c12ebda4ba42f52"}
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.763671 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"541768b2-0585-41dd-9321-cc5b28a3053b","Type":"ContainerStarted","Data":"d7f28212535776366c57c27dc3c7a67897bd5789089f10f89b0fee5e4e15221b"}
Oct 04 05:07:17 crc kubenswrapper[4802]: I1004 05:07:17.912253 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.130146 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8446845c4b-xzvk8"
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.250666 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data-custom\") pod \"90403c7e-9344-4d4f-a0f1-80797f1cab83\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") "
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.250942 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-combined-ca-bundle\") pod \"90403c7e-9344-4d4f-a0f1-80797f1cab83\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") "
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.251033 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data\") pod \"90403c7e-9344-4d4f-a0f1-80797f1cab83\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") "
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.251151 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8vkp\" (UniqueName: \"kubernetes.io/projected/90403c7e-9344-4d4f-a0f1-80797f1cab83-kube-api-access-k8vkp\") pod \"90403c7e-9344-4d4f-a0f1-80797f1cab83\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") "
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.251177 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90403c7e-9344-4d4f-a0f1-80797f1cab83-logs\") pod \"90403c7e-9344-4d4f-a0f1-80797f1cab83\" (UID: \"90403c7e-9344-4d4f-a0f1-80797f1cab83\") "
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.252144 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90403c7e-9344-4d4f-a0f1-80797f1cab83-logs" (OuterVolumeSpecName: "logs") pod "90403c7e-9344-4d4f-a0f1-80797f1cab83" (UID: "90403c7e-9344-4d4f-a0f1-80797f1cab83"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.265821 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "90403c7e-9344-4d4f-a0f1-80797f1cab83" (UID: "90403c7e-9344-4d4f-a0f1-80797f1cab83"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.265867 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90403c7e-9344-4d4f-a0f1-80797f1cab83-kube-api-access-k8vkp" (OuterVolumeSpecName: "kube-api-access-k8vkp") pod "90403c7e-9344-4d4f-a0f1-80797f1cab83" (UID: "90403c7e-9344-4d4f-a0f1-80797f1cab83"). InnerVolumeSpecName "kube-api-access-k8vkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.292297 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90403c7e-9344-4d4f-a0f1-80797f1cab83" (UID: "90403c7e-9344-4d4f-a0f1-80797f1cab83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.316216 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data" (OuterVolumeSpecName: "config-data") pod "90403c7e-9344-4d4f-a0f1-80797f1cab83" (UID: "90403c7e-9344-4d4f-a0f1-80797f1cab83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.352235 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8vkp\" (UniqueName: \"kubernetes.io/projected/90403c7e-9344-4d4f-a0f1-80797f1cab83-kube-api-access-k8vkp\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.352262 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90403c7e-9344-4d4f-a0f1-80797f1cab83-logs\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.352273 4802 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.352281 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.352289 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90403c7e-9344-4d4f-a0f1-80797f1cab83-config-data\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.803243 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" event={"ID":"fd505bc4-f9fa-4e50-a094-8f46c2d592a0","Type":"ContainerStarted","Data":"956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29"}
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.805796 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.819780 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8446845c4b-xzvk8" event={"ID":"90403c7e-9344-4d4f-a0f1-80797f1cab83","Type":"ContainerDied","Data":"7983588d55ac16d7b707017c29a03079d153ea81e2a1db8a259e887b4141e0d5"}
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.819837 4802 scope.go:117] "RemoveContainer" containerID="c82b8a8b985dee6ff468be90a4dce68561d04564c9609a3c4c12ebda4ba42f52"
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.820010 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8446845c4b-xzvk8"
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.826020 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"541768b2-0585-41dd-9321-cc5b28a3053b","Type":"ContainerStarted","Data":"a02087030babb55b0180bed7d7abd0610837690bb3c926df0131f6f0fce452fd"}
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.836079 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" podStartSLOduration=2.836059017 podStartE2EDuration="2.836059017s" podCreationTimestamp="2025-10-04 05:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:18.832077944 +0000 UTC m=+1281.240078589" watchObservedRunningTime="2025-10-04 05:07:18.836059017 +0000 UTC m=+1281.244059642"
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.857613 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8446845c4b-xzvk8"]
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.864565 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8446845c4b-xzvk8"]
Oct 04 05:07:18 crc kubenswrapper[4802]: I1004 05:07:18.866836 4802 scope.go:117] "RemoveContainer" containerID="afdaee336383e2eac9cda2584553208a237b59234d55ccab5f93a57b4d6c78fc"
Oct 04 05:07:19 crc kubenswrapper[4802]: I1004 05:07:19.833912 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe18510a-9ff5-431e-8f89-be3ede5a5bd4","Type":"ContainerStarted","Data":"d5c88f26e70e3632dba82ef5e30a6f53328e6b77ff37efbb3098a32bc9027002"}
Oct 04 05:07:19 crc kubenswrapper[4802]: I1004 05:07:19.834250 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe18510a-9ff5-431e-8f89-be3ede5a5bd4","Type":"ContainerStarted","Data":"7c76ea5cc3c8c7fc28722b9186013b3c0b1dfe413a462539747bc8d12eb8197d"}
Oct 04 05:07:19 crc kubenswrapper[4802]: I1004 05:07:19.838866 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" containerName="cinder-api-log" containerID="cri-o://a02087030babb55b0180bed7d7abd0610837690bb3c926df0131f6f0fce452fd" gracePeriod=30
Oct 04 05:07:19 crc kubenswrapper[4802]: I1004 05:07:19.839255 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"541768b2-0585-41dd-9321-cc5b28a3053b","Type":"ContainerStarted","Data":"4f3893a12e535b523b35967ab1357d6ba18a8aba70042d322aab537368ba05b7"}
Oct 04 05:07:19 crc kubenswrapper[4802]: I1004 05:07:19.839330 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Oct 04 05:07:19 crc kubenswrapper[4802]: I1004 05:07:19.839424 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" containerName="cinder-api" containerID="cri-o://4f3893a12e535b523b35967ab1357d6ba18a8aba70042d322aab537368ba05b7" gracePeriod=30
Oct 04 05:07:19 crc kubenswrapper[4802]: I1004 05:07:19.862465 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.76154888 podStartE2EDuration="4.862444661s" podCreationTimestamp="2025-10-04 05:07:15 +0000 UTC" firstStartedPulling="2025-10-04 05:07:17.061041078 +0000 UTC m=+1279.469041713" lastFinishedPulling="2025-10-04 05:07:18.161936879 +0000 UTC m=+1280.569937494" observedRunningTime="2025-10-04 05:07:19.854414123 +0000 UTC m=+1282.262414748" watchObservedRunningTime="2025-10-04 05:07:19.862444661 +0000 UTC m=+1282.270445286"
Oct 04 05:07:19 crc kubenswrapper[4802]: I1004 05:07:19.880191 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.880172114 podStartE2EDuration="3.880172114s" podCreationTimestamp="2025-10-04 05:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:19.872772944 +0000 UTC m=+1282.280773569" watchObservedRunningTime="2025-10-04 05:07:19.880172114 +0000 UTC m=+1282.288172739"
Oct 04 05:07:20 crc kubenswrapper[4802]: I1004 05:07:20.372533 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" path="/var/lib/kubelet/pods/90403c7e-9344-4d4f-a0f1-80797f1cab83/volumes"
Oct 04 05:07:20 crc kubenswrapper[4802]: I1004 05:07:20.849779 4802 generic.go:334] "Generic (PLEG): container finished" podID="541768b2-0585-41dd-9321-cc5b28a3053b" containerID="4f3893a12e535b523b35967ab1357d6ba18a8aba70042d322aab537368ba05b7" exitCode=0
Oct 04 05:07:20 crc kubenswrapper[4802]: I1004 05:07:20.849820 4802 generic.go:334] "Generic (PLEG): container finished" podID="541768b2-0585-41dd-9321-cc5b28a3053b" containerID="a02087030babb55b0180bed7d7abd0610837690bb3c926df0131f6f0fce452fd" exitCode=143
Oct 04 05:07:20 crc kubenswrapper[4802]: I1004 05:07:20.849896 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"541768b2-0585-41dd-9321-cc5b28a3053b","Type":"ContainerDied","Data":"4f3893a12e535b523b35967ab1357d6ba18a8aba70042d322aab537368ba05b7"}
Oct 04 05:07:20 crc kubenswrapper[4802]: I1004 05:07:20.849977 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"541768b2-0585-41dd-9321-cc5b28a3053b","Type":"ContainerDied","Data":"a02087030babb55b0180bed7d7abd0610837690bb3c926df0131f6f0fce452fd"}
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.419448 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.446182 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.446425 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="ceilometer-central-agent" containerID="cri-o://7d203a3fb8c5895bf131c4ec74184353486aa9a470cba3ded4cada90689dde01" gracePeriod=30
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.446463 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="sg-core" containerID="cri-o://5e6f675210f2c7e5c66e6b93525b15b615bc9b84c7e973d543604550e82acc0a" gracePeriod=30
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.446534 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="ceilometer-notification-agent" containerID="cri-o://9b52750d4d5be522242a54f35a91878568173e4ea0ea28bcccf1fc3b5c689179" gracePeriod=30
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.446538 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="proxy-httpd" containerID="cri-o://defb3fb5d3c680049f0af81ed7d55c8770ceee4531d8f3045846c25075e12384" gracePeriod=30
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.455631 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.141:3000/\": EOF"
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.863882 4802 generic.go:334] "Generic (PLEG): container finished" podID="5039c27d-3446-4947-ae13-7cb899a83c71" containerID="defb3fb5d3c680049f0af81ed7d55c8770ceee4531d8f3045846c25075e12384" exitCode=0
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.864174 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerDied","Data":"defb3fb5d3c680049f0af81ed7d55c8770ceee4531d8f3045846c25075e12384"}
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.864240 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerDied","Data":"5e6f675210f2c7e5c66e6b93525b15b615bc9b84c7e973d543604550e82acc0a"}
Oct 04 05:07:21 crc kubenswrapper[4802]: I1004 05:07:21.864192 4802 generic.go:334] "Generic (PLEG): container finished" podID="5039c27d-3446-4947-ae13-7cb899a83c71" containerID="5e6f675210f2c7e5c66e6b93525b15b615bc9b84c7e973d543604550e82acc0a" exitCode=2
Oct 04 05:07:22 crc kubenswrapper[4802]: I1004 05:07:22.662946 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 04 05:07:22 crc kubenswrapper[4802]: I1004 05:07:22.663005 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 04 05:07:22 crc kubenswrapper[4802]: I1004 05:07:22.883147 4802 generic.go:334] "Generic (PLEG): container finished" podID="5039c27d-3446-4947-ae13-7cb899a83c71" containerID="7d203a3fb8c5895bf131c4ec74184353486aa9a470cba3ded4cada90689dde01" exitCode=0
Oct 04 05:07:22 crc kubenswrapper[4802]: I1004 05:07:22.883195 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerDied","Data":"7d203a3fb8c5895bf131c4ec74184353486aa9a470cba3ded4cada90689dde01"}
Oct 04 05:07:23 crc kubenswrapper[4802]: I1004 05:07:23.903741 4802 generic.go:334] "Generic (PLEG): container finished" podID="5039c27d-3446-4947-ae13-7cb899a83c71" containerID="9b52750d4d5be522242a54f35a91878568173e4ea0ea28bcccf1fc3b5c689179" exitCode=0
Oct 04 05:07:23 crc kubenswrapper[4802]: I1004 05:07:23.903825 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerDied","Data":"9b52750d4d5be522242a54f35a91878568173e4ea0ea28bcccf1fc3b5c689179"}
Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.405899 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zxhcq"]
Oct 04 05:07:24 crc kubenswrapper[4802]: E1004 05:07:24.406336 4802 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api-log" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.406357 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api-log" Oct 04 05:07:24 crc kubenswrapper[4802]: E1004 05:07:24.406407 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.406416 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.406615 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.406657 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="90403c7e-9344-4d4f-a0f1-80797f1cab83" containerName="barbican-api-log" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.407382 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zxhcq" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.445778 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zxhcq"] Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.493499 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmx7z\" (UniqueName: \"kubernetes.io/projected/342d0d97-3dda-4dde-b14c-1b7465e68e0b-kube-api-access-mmx7z\") pod \"nova-api-db-create-zxhcq\" (UID: \"342d0d97-3dda-4dde-b14c-1b7465e68e0b\") " pod="openstack/nova-api-db-create-zxhcq" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.495194 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wfcjk"] Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.496535 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wfcjk" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.505436 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wfcjk"] Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.595171 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8blp\" (UniqueName: \"kubernetes.io/projected/6005e25a-ff70-4893-864d-f17cd5715536-kube-api-access-n8blp\") pod \"nova-cell0-db-create-wfcjk\" (UID: \"6005e25a-ff70-4893-864d-f17cd5715536\") " pod="openstack/nova-cell0-db-create-wfcjk" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.595480 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmx7z\" (UniqueName: \"kubernetes.io/projected/342d0d97-3dda-4dde-b14c-1b7465e68e0b-kube-api-access-mmx7z\") pod \"nova-api-db-create-zxhcq\" (UID: \"342d0d97-3dda-4dde-b14c-1b7465e68e0b\") " pod="openstack/nova-api-db-create-zxhcq" Oct 04 05:07:24 crc kubenswrapper[4802]: 
I1004 05:07:24.617844 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmx7z\" (UniqueName: \"kubernetes.io/projected/342d0d97-3dda-4dde-b14c-1b7465e68e0b-kube-api-access-mmx7z\") pod \"nova-api-db-create-zxhcq\" (UID: \"342d0d97-3dda-4dde-b14c-1b7465e68e0b\") " pod="openstack/nova-api-db-create-zxhcq" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.698078 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8blp\" (UniqueName: \"kubernetes.io/projected/6005e25a-ff70-4893-864d-f17cd5715536-kube-api-access-n8blp\") pod \"nova-cell0-db-create-wfcjk\" (UID: \"6005e25a-ff70-4893-864d-f17cd5715536\") " pod="openstack/nova-cell0-db-create-wfcjk" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.698370 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-g6l87"] Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.699346 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g6l87" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.722699 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-g6l87"] Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.727564 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zxhcq" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.739131 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8blp\" (UniqueName: \"kubernetes.io/projected/6005e25a-ff70-4893-864d-f17cd5715536-kube-api-access-n8blp\") pod \"nova-cell0-db-create-wfcjk\" (UID: \"6005e25a-ff70-4893-864d-f17cd5715536\") " pod="openstack/nova-cell0-db-create-wfcjk" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.799993 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lfs\" (UniqueName: \"kubernetes.io/projected/4cd1006b-e193-4f3b-b81f-c0147c185ee5-kube-api-access-j9lfs\") pod \"nova-cell1-db-create-g6l87\" (UID: \"4cd1006b-e193-4f3b-b81f-c0147c185ee5\") " pod="openstack/nova-cell1-db-create-g6l87" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.824051 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wfcjk" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.902421 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9lfs\" (UniqueName: \"kubernetes.io/projected/4cd1006b-e193-4f3b-b81f-c0147c185ee5-kube-api-access-j9lfs\") pod \"nova-cell1-db-create-g6l87\" (UID: \"4cd1006b-e193-4f3b-b81f-c0147c185ee5\") " pod="openstack/nova-cell1-db-create-g6l87" Oct 04 05:07:24 crc kubenswrapper[4802]: I1004 05:07:24.923141 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9lfs\" (UniqueName: \"kubernetes.io/projected/4cd1006b-e193-4f3b-b81f-c0147c185ee5-kube-api-access-j9lfs\") pod \"nova-cell1-db-create-g6l87\" (UID: \"4cd1006b-e193-4f3b-b81f-c0147c185ee5\") " pod="openstack/nova-cell1-db-create-g6l87" Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.073283 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-g6l87" Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.924820 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5039c27d-3446-4947-ae13-7cb899a83c71","Type":"ContainerDied","Data":"bf8b348cba0ef712c5917c6d8041f86528e570a277f1c3f08f96cd9db5064eb1"} Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.925363 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8b348cba0ef712c5917c6d8041f86528e570a277f1c3f08f96cd9db5064eb1" Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.926404 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f26f2ddf-5813-4d3b-aa54-302bba18586f","Type":"ContainerStarted","Data":"825c60cda37fffbb812b8aa8efdd27cbc22058da545880d73ad70632113af400"} Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.931674 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"541768b2-0585-41dd-9321-cc5b28a3053b","Type":"ContainerDied","Data":"d7f28212535776366c57c27dc3c7a67897bd5789089f10f89b0fee5e4e15221b"} Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.931713 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7f28212535776366c57c27dc3c7a67897bd5789089f10f89b0fee5e4e15221b" Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.950135 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.138256605 podStartE2EDuration="14.950091745s" podCreationTimestamp="2025-10-04 05:07:11 +0000 UTC" firstStartedPulling="2025-10-04 05:07:12.698398001 +0000 UTC m=+1275.106398626" lastFinishedPulling="2025-10-04 05:07:25.510233141 +0000 UTC m=+1287.918233766" observedRunningTime="2025-10-04 05:07:25.941397678 +0000 UTC m=+1288.349398313" watchObservedRunningTime="2025-10-04 05:07:25.950091745 +0000 UTC 
m=+1288.358092380" Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.975192 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 04 05:07:25 crc kubenswrapper[4802]: I1004 05:07:25.985517 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019005 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data\") pod \"541768b2-0585-41dd-9321-cc5b28a3053b\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019053 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-log-httpd\") pod \"5039c27d-3446-4947-ae13-7cb899a83c71\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019099 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-run-httpd\") pod \"5039c27d-3446-4947-ae13-7cb899a83c71\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019118 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541768b2-0585-41dd-9321-cc5b28a3053b-logs\") pod \"541768b2-0585-41dd-9321-cc5b28a3053b\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019147 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data-custom\") 
pod \"541768b2-0585-41dd-9321-cc5b28a3053b\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019169 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-combined-ca-bundle\") pod \"5039c27d-3446-4947-ae13-7cb899a83c71\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019198 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-sg-core-conf-yaml\") pod \"5039c27d-3446-4947-ae13-7cb899a83c71\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019239 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-combined-ca-bundle\") pod \"541768b2-0585-41dd-9321-cc5b28a3053b\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019316 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/541768b2-0585-41dd-9321-cc5b28a3053b-etc-machine-id\") pod \"541768b2-0585-41dd-9321-cc5b28a3053b\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019369 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48rnb\" (UniqueName: \"kubernetes.io/projected/541768b2-0585-41dd-9321-cc5b28a3053b-kube-api-access-48rnb\") pod \"541768b2-0585-41dd-9321-cc5b28a3053b\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019428 4802 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-config-data\") pod \"5039c27d-3446-4947-ae13-7cb899a83c71\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019462 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-scripts\") pod \"5039c27d-3446-4947-ae13-7cb899a83c71\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019482 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp256\" (UniqueName: \"kubernetes.io/projected/5039c27d-3446-4947-ae13-7cb899a83c71-kube-api-access-kp256\") pod \"5039c27d-3446-4947-ae13-7cb899a83c71\" (UID: \"5039c27d-3446-4947-ae13-7cb899a83c71\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.019518 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-scripts\") pod \"541768b2-0585-41dd-9321-cc5b28a3053b\" (UID: \"541768b2-0585-41dd-9321-cc5b28a3053b\") " Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.020202 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/541768b2-0585-41dd-9321-cc5b28a3053b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "541768b2-0585-41dd-9321-cc5b28a3053b" (UID: "541768b2-0585-41dd-9321-cc5b28a3053b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.020746 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541768b2-0585-41dd-9321-cc5b28a3053b-logs" (OuterVolumeSpecName: "logs") pod "541768b2-0585-41dd-9321-cc5b28a3053b" (UID: "541768b2-0585-41dd-9321-cc5b28a3053b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.024080 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5039c27d-3446-4947-ae13-7cb899a83c71" (UID: "5039c27d-3446-4947-ae13-7cb899a83c71"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.024485 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5039c27d-3446-4947-ae13-7cb899a83c71" (UID: "5039c27d-3446-4947-ae13-7cb899a83c71"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.039791 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-scripts" (OuterVolumeSpecName: "scripts") pod "5039c27d-3446-4947-ae13-7cb899a83c71" (UID: "5039c27d-3446-4947-ae13-7cb899a83c71"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.039834 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5039c27d-3446-4947-ae13-7cb899a83c71-kube-api-access-kp256" (OuterVolumeSpecName: "kube-api-access-kp256") pod "5039c27d-3446-4947-ae13-7cb899a83c71" (UID: "5039c27d-3446-4947-ae13-7cb899a83c71"). InnerVolumeSpecName "kube-api-access-kp256". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.039975 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "541768b2-0585-41dd-9321-cc5b28a3053b" (UID: "541768b2-0585-41dd-9321-cc5b28a3053b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.040905 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-scripts" (OuterVolumeSpecName: "scripts") pod "541768b2-0585-41dd-9321-cc5b28a3053b" (UID: "541768b2-0585-41dd-9321-cc5b28a3053b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.042938 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541768b2-0585-41dd-9321-cc5b28a3053b-kube-api-access-48rnb" (OuterVolumeSpecName: "kube-api-access-48rnb") pod "541768b2-0585-41dd-9321-cc5b28a3053b" (UID: "541768b2-0585-41dd-9321-cc5b28a3053b"). InnerVolumeSpecName "kube-api-access-48rnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.076707 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wfcjk"] Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.076790 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5039c27d-3446-4947-ae13-7cb899a83c71" (UID: "5039c27d-3446-4947-ae13-7cb899a83c71"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.089919 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "541768b2-0585-41dd-9321-cc5b28a3053b" (UID: "541768b2-0585-41dd-9321-cc5b28a3053b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121328 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121448 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121461 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5039c27d-3446-4947-ae13-7cb899a83c71-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121472 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541768b2-0585-41dd-9321-cc5b28a3053b-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121483 4802 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121495 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121505 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121517 4802 reconciler_common.go:293] "Volume detached for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/541768b2-0585-41dd-9321-cc5b28a3053b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121562 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48rnb\" (UniqueName: \"kubernetes.io/projected/541768b2-0585-41dd-9321-cc5b28a3053b-kube-api-access-48rnb\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121573 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.121583 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp256\" (UniqueName: \"kubernetes.io/projected/5039c27d-3446-4947-ae13-7cb899a83c71-kube-api-access-kp256\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.123854 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data" (OuterVolumeSpecName: "config-data") pod "541768b2-0585-41dd-9321-cc5b28a3053b" (UID: "541768b2-0585-41dd-9321-cc5b28a3053b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.144548 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5039c27d-3446-4947-ae13-7cb899a83c71" (UID: "5039c27d-3446-4947-ae13-7cb899a83c71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.162496 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-config-data" (OuterVolumeSpecName: "config-data") pod "5039c27d-3446-4947-ae13-7cb899a83c71" (UID: "5039c27d-3446-4947-ae13-7cb899a83c71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.224007 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.225057 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541768b2-0585-41dd-9321-cc5b28a3053b-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.225084 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5039c27d-3446-4947-ae13-7cb899a83c71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.273988 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zxhcq"] Oct 04 05:07:26 crc kubenswrapper[4802]: W1004 05:07:26.276037 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cd1006b_e193_4f3b_b81f_c0147c185ee5.slice/crio-ae67fdff3d45504073c0319c9ab97b95893b2661b646eb59b517e0dc82aed05d WatchSource:0}: Error finding container ae67fdff3d45504073c0319c9ab97b95893b2661b646eb59b517e0dc82aed05d: Status 404 returned error can't find the container with id ae67fdff3d45504073c0319c9ab97b95893b2661b646eb59b517e0dc82aed05d Oct 04 05:07:26 crc 
kubenswrapper[4802]: I1004 05:07:26.276422 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-g6l87"]
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.592611 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.670725 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-w7w7n"]
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.671011 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" podUID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" containerName="dnsmasq-dns" containerID="cri-o://a9fe4b9b283d7b6bf3654e51d58d8b3e773664e27da95bc06af772ffd5a95f48" gracePeriod=10
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.701745 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.744796 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.948319 4802 generic.go:334] "Generic (PLEG): container finished" podID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" containerID="a9fe4b9b283d7b6bf3654e51d58d8b3e773664e27da95bc06af772ffd5a95f48" exitCode=0
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.948395 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" event={"ID":"c3feeb57-c944-48d9-ac9b-d66991cb5bf4","Type":"ContainerDied","Data":"a9fe4b9b283d7b6bf3654e51d58d8b3e773664e27da95bc06af772ffd5a95f48"}
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.950110 4802 generic.go:334] "Generic (PLEG): container finished" podID="6005e25a-ff70-4893-864d-f17cd5715536" containerID="ecf3392fab271a57083fe40d0af703db262223467ab728949e080cccc92f71cf" exitCode=0
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.950174 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wfcjk" event={"ID":"6005e25a-ff70-4893-864d-f17cd5715536","Type":"ContainerDied","Data":"ecf3392fab271a57083fe40d0af703db262223467ab728949e080cccc92f71cf"}
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.950193 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wfcjk" event={"ID":"6005e25a-ff70-4893-864d-f17cd5715536","Type":"ContainerStarted","Data":"f37ba2a680d24576709325677da929778f0d2e534dfdc73de51e35576d300cea"}
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.951857 4802 generic.go:334] "Generic (PLEG): container finished" podID="4cd1006b-e193-4f3b-b81f-c0147c185ee5" containerID="a587f60628b5ace4ec60bd2bf5af527d23c93d79e0ccfe1c6755281fac07d999" exitCode=0
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.951981 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g6l87" event={"ID":"4cd1006b-e193-4f3b-b81f-c0147c185ee5","Type":"ContainerDied","Data":"a587f60628b5ace4ec60bd2bf5af527d23c93d79e0ccfe1c6755281fac07d999"}
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.952007 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g6l87" event={"ID":"4cd1006b-e193-4f3b-b81f-c0147c185ee5","Type":"ContainerStarted","Data":"ae67fdff3d45504073c0319c9ab97b95893b2661b646eb59b517e0dc82aed05d"}
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.953442 4802 generic.go:334] "Generic (PLEG): container finished" podID="342d0d97-3dda-4dde-b14c-1b7465e68e0b" containerID="d03f1589783209290cc609faef289c68be5893adda751858f5818d64f25b6204" exitCode=0
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.953909 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zxhcq" event={"ID":"342d0d97-3dda-4dde-b14c-1b7465e68e0b","Type":"ContainerDied","Data":"d03f1589783209290cc609faef289c68be5893adda751858f5818d64f25b6204"}
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.953947 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zxhcq" event={"ID":"342d0d97-3dda-4dde-b14c-1b7465e68e0b","Type":"ContainerStarted","Data":"bf5c2959dd36ce2ba3507431901209b47fcae3be7636eb6840aa1b1333806358"}
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.953976 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.954332 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.954889 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerName="cinder-scheduler" containerID="cri-o://7c76ea5cc3c8c7fc28722b9186013b3c0b1dfe413a462539747bc8d12eb8197d" gracePeriod=30
Oct 04 05:07:26 crc kubenswrapper[4802]: I1004 05:07:26.954939 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerName="probe" containerID="cri-o://d5c88f26e70e3632dba82ef5e30a6f53328e6b77ff37efbb3098a32bc9027002" gracePeriod=30
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.114415 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.122781 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.134635 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.142373 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:07:27 crc kubenswrapper[4802]: E1004 05:07:27.142847 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="ceilometer-notification-agent"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.142866 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="ceilometer-notification-agent"
Oct 04 05:07:27 crc kubenswrapper[4802]: E1004 05:07:27.142888 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="sg-core"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.142896 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="sg-core"
Oct 04 05:07:27 crc kubenswrapper[4802]: E1004 05:07:27.142915 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" containerName="cinder-api-log"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.142923 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" containerName="cinder-api-log"
Oct 04 05:07:27 crc kubenswrapper[4802]: E1004 05:07:27.142945 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="proxy-httpd"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.142952 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="proxy-httpd"
Oct 04 05:07:27 crc kubenswrapper[4802]: E1004 05:07:27.142965 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="ceilometer-central-agent"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.142973 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="ceilometer-central-agent"
Oct 04 05:07:27 crc kubenswrapper[4802]: E1004 05:07:27.142991 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" containerName="cinder-api"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.142998 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" containerName="cinder-api"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.143188 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="ceilometer-notification-agent"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.143207 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" containerName="cinder-api"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.143221 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="sg-core"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.143238 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" containerName="cinder-api-log"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.143248 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="ceilometer-central-agent"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.143266 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" containerName="proxy-httpd"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.145227 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.147704 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.147905 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.150022 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.160016 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.178448 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.180428 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.182832 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.183651 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.184954 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.191566 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.229119 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.248858 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbd6\" (UniqueName: \"kubernetes.io/projected/9751f30f-b58e-4f5e-9990-e63ee092a495-kube-api-access-bkbd6\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.248896 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mprx7\" (UniqueName: \"kubernetes.io/projected/e4c99ba7-5bb7-4890-87e8-917d67ea382b-kube-api-access-mprx7\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.248973 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-config-data-custom\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249024 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249084 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-config-data\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249128 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9751f30f-b58e-4f5e-9990-e63ee092a495-logs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249187 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249204 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249286 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-scripts\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249396 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249464 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249490 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249553 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-config-data\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249726 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9751f30f-b58e-4f5e-9990-e63ee092a495-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249777 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-scripts\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.249794 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.351477 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmwtw\" (UniqueName: \"kubernetes.io/projected/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-kube-api-access-bmwtw\") pod \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") "
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.351714 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-sb\") pod \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") "
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.351785 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-dns-svc\") pod \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") "
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.351829 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-config\") pod \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") "
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.351860 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-nb\") pod \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\" (UID: \"c3feeb57-c944-48d9-ac9b-d66991cb5bf4\") "
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352145 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-config-data\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352172 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9751f30f-b58e-4f5e-9990-e63ee092a495-logs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352193 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352211 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352233 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-scripts\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352264 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352310 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352335 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352379 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-config-data\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352453 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9751f30f-b58e-4f5e-9990-e63ee092a495-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352486 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-scripts\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352506 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352559 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkbd6\" (UniqueName: \"kubernetes.io/projected/9751f30f-b58e-4f5e-9990-e63ee092a495-kube-api-access-bkbd6\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352581 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mprx7\" (UniqueName: \"kubernetes.io/projected/e4c99ba7-5bb7-4890-87e8-917d67ea382b-kube-api-access-mprx7\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352600 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-config-data-custom\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352636 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.352638 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9751f30f-b58e-4f5e-9990-e63ee092a495-logs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.373201 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9751f30f-b58e-4f5e-9990-e63ee092a495-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.373864 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.376945 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.377440 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-config-data\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.378383 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-scripts\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.378841 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-kube-api-access-bmwtw" (OuterVolumeSpecName: "kube-api-access-bmwtw") pod "c3feeb57-c944-48d9-ac9b-d66991cb5bf4" (UID: "c3feeb57-c944-48d9-ac9b-d66991cb5bf4"). InnerVolumeSpecName "kube-api-access-bmwtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.381264 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.401476 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.403308 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-scripts\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.411829 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-config-data-custom\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.412470 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.412693 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-config-data\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.413105 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9751f30f-b58e-4f5e-9990-e63ee092a495-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.413681 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mprx7\" (UniqueName: \"kubernetes.io/projected/e4c99ba7-5bb7-4890-87e8-917d67ea382b-kube-api-access-mprx7\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.413710 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkbd6\" (UniqueName: \"kubernetes.io/projected/9751f30f-b58e-4f5e-9990-e63ee092a495-kube-api-access-bkbd6\") pod \"cinder-api-0\" (UID: \"9751f30f-b58e-4f5e-9990-e63ee092a495\") " pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.423389 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.436234 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3feeb57-c944-48d9-ac9b-d66991cb5bf4" (UID: "c3feeb57-c944-48d9-ac9b-d66991cb5bf4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.436303 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3feeb57-c944-48d9-ac9b-d66991cb5bf4" (UID: "c3feeb57-c944-48d9-ac9b-d66991cb5bf4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.448008 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3feeb57-c944-48d9-ac9b-d66991cb5bf4" (UID: "c3feeb57-c944-48d9-ac9b-d66991cb5bf4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.452839 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-config" (OuterVolumeSpecName: "config") pod "c3feeb57-c944-48d9-ac9b-d66991cb5bf4" (UID: "c3feeb57-c944-48d9-ac9b-d66991cb5bf4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.454057 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.454135 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.454205 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-config\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.454261 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.454315 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmwtw\" (UniqueName: \"kubernetes.io/projected/c3feeb57-c944-48d9-ac9b-d66991cb5bf4-kube-api-access-bmwtw\") on node \"crc\" DevicePath \"\""
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.540065 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.559784 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.973655 4802 generic.go:334] "Generic (PLEG): container finished" podID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerID="d5c88f26e70e3632dba82ef5e30a6f53328e6b77ff37efbb3098a32bc9027002" exitCode=0
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.974005 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe18510a-9ff5-431e-8f89-be3ede5a5bd4","Type":"ContainerDied","Data":"d5c88f26e70e3632dba82ef5e30a6f53328e6b77ff37efbb3098a32bc9027002"}
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.976009 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n"
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.979579 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-w7w7n" event={"ID":"c3feeb57-c944-48d9-ac9b-d66991cb5bf4","Type":"ContainerDied","Data":"6d62f092b66fac52edc7722e1cb3a808bec8878faab4be1ebd40ad7357cd3e38"}
Oct 04 05:07:27 crc kubenswrapper[4802]: I1004 05:07:27.979681 4802 scope.go:117] "RemoveContainer" containerID="a9fe4b9b283d7b6bf3654e51d58d8b3e773664e27da95bc06af772ffd5a95f48"
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.021227 4802 scope.go:117] "RemoveContainer" containerID="6c1b3e18636973c8ab11413d89236799a8d6413679a182e7eebfde1917a083bb"
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.036115 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:07:28 crc kubenswrapper[4802]: W1004 05:07:28.042076 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4c99ba7_5bb7_4890_87e8_917d67ea382b.slice/crio-19866d8b589b3c9b7a8abb12ce0e59c20f738b94db7821effd6bfbc71019e15a WatchSource:0}: Error finding container 19866d8b589b3c9b7a8abb12ce0e59c20f738b94db7821effd6bfbc71019e15a: Status 404 returned error can't find the container with id 19866d8b589b3c9b7a8abb12ce0e59c20f738b94db7821effd6bfbc71019e15a
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.043815 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-w7w7n"]
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.052393 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-w7w7n"]
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.111542 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 04 05:07:28 crc kubenswrapper[4802]: W1004 05:07:28.122784 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9751f30f_b58e_4f5e_9990_e63ee092a495.slice/crio-fe3de2313dbd3942027357c77f1642287923b5b7b4ef44cb629c433d924d72aa WatchSource:0}: Error finding container fe3de2313dbd3942027357c77f1642287923b5b7b4ef44cb629c433d924d72aa: Status 404 returned error can't find the container with id fe3de2313dbd3942027357c77f1642287923b5b7b4ef44cb629c433d924d72aa
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.382401 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g6l87"
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.392820 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5039c27d-3446-4947-ae13-7cb899a83c71" path="/var/lib/kubelet/pods/5039c27d-3446-4947-ae13-7cb899a83c71/volumes"
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.393816 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541768b2-0585-41dd-9321-cc5b28a3053b" path="/var/lib/kubelet/pods/541768b2-0585-41dd-9321-cc5b28a3053b/volumes"
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.394745 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" path="/var/lib/kubelet/pods/c3feeb57-c944-48d9-ac9b-d66991cb5bf4/volumes"
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.492878 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wfcjk"
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.516385 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxhcq"
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.532491 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9lfs\" (UniqueName: \"kubernetes.io/projected/4cd1006b-e193-4f3b-b81f-c0147c185ee5-kube-api-access-j9lfs\") pod \"4cd1006b-e193-4f3b-b81f-c0147c185ee5\" (UID: \"4cd1006b-e193-4f3b-b81f-c0147c185ee5\") "
Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.539358 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd1006b-e193-4f3b-b81f-c0147c185ee5-kube-api-access-j9lfs" (OuterVolumeSpecName: "kube-api-access-j9lfs") pod "4cd1006b-e193-4f3b-b81f-c0147c185ee5" (UID: "4cd1006b-e193-4f3b-b81f-c0147c185ee5"). InnerVolumeSpecName "kube-api-access-j9lfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.634565 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8blp\" (UniqueName: \"kubernetes.io/projected/6005e25a-ff70-4893-864d-f17cd5715536-kube-api-access-n8blp\") pod \"6005e25a-ff70-4893-864d-f17cd5715536\" (UID: \"6005e25a-ff70-4893-864d-f17cd5715536\") " Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.634754 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmx7z\" (UniqueName: \"kubernetes.io/projected/342d0d97-3dda-4dde-b14c-1b7465e68e0b-kube-api-access-mmx7z\") pod \"342d0d97-3dda-4dde-b14c-1b7465e68e0b\" (UID: \"342d0d97-3dda-4dde-b14c-1b7465e68e0b\") " Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.635348 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9lfs\" (UniqueName: \"kubernetes.io/projected/4cd1006b-e193-4f3b-b81f-c0147c185ee5-kube-api-access-j9lfs\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.638855 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6005e25a-ff70-4893-864d-f17cd5715536-kube-api-access-n8blp" (OuterVolumeSpecName: "kube-api-access-n8blp") pod "6005e25a-ff70-4893-864d-f17cd5715536" (UID: "6005e25a-ff70-4893-864d-f17cd5715536"). InnerVolumeSpecName "kube-api-access-n8blp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.639918 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342d0d97-3dda-4dde-b14c-1b7465e68e0b-kube-api-access-mmx7z" (OuterVolumeSpecName: "kube-api-access-mmx7z") pod "342d0d97-3dda-4dde-b14c-1b7465e68e0b" (UID: "342d0d97-3dda-4dde-b14c-1b7465e68e0b"). InnerVolumeSpecName "kube-api-access-mmx7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.737453 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8blp\" (UniqueName: \"kubernetes.io/projected/6005e25a-ff70-4893-864d-f17cd5715536-kube-api-access-n8blp\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.737485 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmx7z\" (UniqueName: \"kubernetes.io/projected/342d0d97-3dda-4dde-b14c-1b7465e68e0b-kube-api-access-mmx7z\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.989852 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxhcq" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.989852 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zxhcq" event={"ID":"342d0d97-3dda-4dde-b14c-1b7465e68e0b","Type":"ContainerDied","Data":"bf5c2959dd36ce2ba3507431901209b47fcae3be7636eb6840aa1b1333806358"} Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.990204 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf5c2959dd36ce2ba3507431901209b47fcae3be7636eb6840aa1b1333806358" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.992400 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9751f30f-b58e-4f5e-9990-e63ee092a495","Type":"ContainerStarted","Data":"534f622afaec3c3649d40b6e4e2b5f9e5445a234867869e3977a097e9641a44a"} Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.992437 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9751f30f-b58e-4f5e-9990-e63ee092a495","Type":"ContainerStarted","Data":"fe3de2313dbd3942027357c77f1642287923b5b7b4ef44cb629c433d924d72aa"} Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.999300 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wfcjk" event={"ID":"6005e25a-ff70-4893-864d-f17cd5715536","Type":"ContainerDied","Data":"f37ba2a680d24576709325677da929778f0d2e534dfdc73de51e35576d300cea"} Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.999333 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37ba2a680d24576709325677da929778f0d2e534dfdc73de51e35576d300cea" Oct 04 05:07:28 crc kubenswrapper[4802]: I1004 05:07:28.999390 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wfcjk" Oct 04 05:07:29 crc kubenswrapper[4802]: I1004 05:07:29.011188 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g6l87" event={"ID":"4cd1006b-e193-4f3b-b81f-c0147c185ee5","Type":"ContainerDied","Data":"ae67fdff3d45504073c0319c9ab97b95893b2661b646eb59b517e0dc82aed05d"} Oct 04 05:07:29 crc kubenswrapper[4802]: I1004 05:07:29.011235 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae67fdff3d45504073c0319c9ab97b95893b2661b646eb59b517e0dc82aed05d" Oct 04 05:07:29 crc kubenswrapper[4802]: I1004 05:07:29.011294 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-g6l87" Oct 04 05:07:29 crc kubenswrapper[4802]: I1004 05:07:29.019022 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerStarted","Data":"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67"} Oct 04 05:07:29 crc kubenswrapper[4802]: I1004 05:07:29.019104 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerStarted","Data":"19866d8b589b3c9b7a8abb12ce0e59c20f738b94db7821effd6bfbc71019e15a"} Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.036460 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerStarted","Data":"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21"} Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.044460 4802 generic.go:334] "Generic (PLEG): container finished" podID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerID="7c76ea5cc3c8c7fc28722b9186013b3c0b1dfe413a462539747bc8d12eb8197d" exitCode=0 Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.044531 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe18510a-9ff5-431e-8f89-be3ede5a5bd4","Type":"ContainerDied","Data":"7c76ea5cc3c8c7fc28722b9186013b3c0b1dfe413a462539747bc8d12eb8197d"} Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.046979 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9751f30f-b58e-4f5e-9990-e63ee092a495","Type":"ContainerStarted","Data":"5d1db943f463ad020083d5067c536e84d9f271f1f02b89e4c986cb24886fd1de"} Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.048368 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 04 05:07:30 
crc kubenswrapper[4802]: I1004 05:07:30.079116 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.079091997 podStartE2EDuration="3.079091997s" podCreationTimestamp="2025-10-04 05:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:30.066136039 +0000 UTC m=+1292.474136684" watchObservedRunningTime="2025-10-04 05:07:30.079091997 +0000 UTC m=+1292.487092632" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.232504 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.373950 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-etc-machine-id\") pod \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.374039 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct6v4\" (UniqueName: \"kubernetes.io/projected/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-kube-api-access-ct6v4\") pod \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.374118 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-scripts\") pod \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.374195 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data-custom\") pod \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.374260 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data\") pod \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.374291 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-combined-ca-bundle\") pod \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\" (UID: \"fe18510a-9ff5-431e-8f89-be3ede5a5bd4\") " Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.389951 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-scripts" (OuterVolumeSpecName: "scripts") pod "fe18510a-9ff5-431e-8f89-be3ede5a5bd4" (UID: "fe18510a-9ff5-431e-8f89-be3ede5a5bd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.390045 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fe18510a-9ff5-431e-8f89-be3ede5a5bd4" (UID: "fe18510a-9ff5-431e-8f89-be3ede5a5bd4"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.393049 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-kube-api-access-ct6v4" (OuterVolumeSpecName: "kube-api-access-ct6v4") pod "fe18510a-9ff5-431e-8f89-be3ede5a5bd4" (UID: "fe18510a-9ff5-431e-8f89-be3ede5a5bd4"). InnerVolumeSpecName "kube-api-access-ct6v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.402752 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe18510a-9ff5-431e-8f89-be3ede5a5bd4" (UID: "fe18510a-9ff5-431e-8f89-be3ede5a5bd4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.477987 4802 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.478024 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct6v4\" (UniqueName: \"kubernetes.io/projected/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-kube-api-access-ct6v4\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.478039 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.478049 4802 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.530787 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe18510a-9ff5-431e-8f89-be3ede5a5bd4" (UID: "fe18510a-9ff5-431e-8f89-be3ede5a5bd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.558912 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data" (OuterVolumeSpecName: "config-data") pod "fe18510a-9ff5-431e-8f89-be3ede5a5bd4" (UID: "fe18510a-9ff5-431e-8f89-be3ede5a5bd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.579231 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:30 crc kubenswrapper[4802]: I1004 05:07:30.579270 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe18510a-9ff5-431e-8f89-be3ede5a5bd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.058992 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerStarted","Data":"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1"} Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.061801 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.061845 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe18510a-9ff5-431e-8f89-be3ede5a5bd4","Type":"ContainerDied","Data":"f120e24c84a937ef9079ccb3322828cf2a0fbf9f35ee6d844aae6acda541620c"} Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.061881 4802 scope.go:117] "RemoveContainer" containerID="d5c88f26e70e3632dba82ef5e30a6f53328e6b77ff37efbb3098a32bc9027002" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.085904 4802 scope.go:117] "RemoveContainer" containerID="7c76ea5cc3c8c7fc28722b9186013b3c0b1dfe413a462539747bc8d12eb8197d" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.093475 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.102111 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.122877 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 04 05:07:31 crc kubenswrapper[4802]: E1004 05:07:31.123285 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342d0d97-3dda-4dde-b14c-1b7465e68e0b" containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123304 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="342d0d97-3dda-4dde-b14c-1b7465e68e0b" containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: E1004 05:07:31.123326 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6005e25a-ff70-4893-864d-f17cd5715536" containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123334 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6005e25a-ff70-4893-864d-f17cd5715536" 
containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: E1004 05:07:31.123346 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" containerName="dnsmasq-dns" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123354 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" containerName="dnsmasq-dns" Oct 04 05:07:31 crc kubenswrapper[4802]: E1004 05:07:31.123370 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd1006b-e193-4f3b-b81f-c0147c185ee5" containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123377 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd1006b-e193-4f3b-b81f-c0147c185ee5" containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: E1004 05:07:31.123396 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" containerName="init" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123404 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" containerName="init" Oct 04 05:07:31 crc kubenswrapper[4802]: E1004 05:07:31.123417 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerName="probe" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123423 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerName="probe" Oct 04 05:07:31 crc kubenswrapper[4802]: E1004 05:07:31.123438 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerName="cinder-scheduler" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123447 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerName="cinder-scheduler" Oct 04 
05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123669 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="342d0d97-3dda-4dde-b14c-1b7465e68e0b" containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123685 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="6005e25a-ff70-4893-864d-f17cd5715536" containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123701 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerName="probe" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123721 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" containerName="cinder-scheduler" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123738 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd1006b-e193-4f3b-b81f-c0147c185ee5" containerName="mariadb-database-create" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.123755 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3feeb57-c944-48d9-ac9b-d66991cb5bf4" containerName="dnsmasq-dns" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.124848 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.127630 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.137597 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.188268 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03367bc3-3554-4b43-8215-070e0d9d8c13-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.188318 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.188544 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.188605 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-config-data\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 
05:07:31.188718 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-scripts\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.188825 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6bh\" (UniqueName: \"kubernetes.io/projected/03367bc3-3554-4b43-8215-070e0d9d8c13-kube-api-access-hc6bh\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.289969 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.290017 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-config-data\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.290060 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-scripts\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.290094 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6bh\" (UniqueName: 
\"kubernetes.io/projected/03367bc3-3554-4b43-8215-070e0d9d8c13-kube-api-access-hc6bh\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.290170 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03367bc3-3554-4b43-8215-070e0d9d8c13-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.290201 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.290323 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03367bc3-3554-4b43-8215-070e0d9d8c13-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.293633 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-scripts\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.297266 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " 
pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.297717 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-config-data\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.297917 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03367bc3-3554-4b43-8215-070e0d9d8c13-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.310114 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6bh\" (UniqueName: \"kubernetes.io/projected/03367bc3-3554-4b43-8215-070e0d9d8c13-kube-api-access-hc6bh\") pod \"cinder-scheduler-0\" (UID: \"03367bc3-3554-4b43-8215-070e0d9d8c13\") " pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.444518 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 04 05:07:31 crc kubenswrapper[4802]: I1004 05:07:31.882525 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 04 05:07:31 crc kubenswrapper[4802]: W1004 05:07:31.900846 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03367bc3_3554_4b43_8215_070e0d9d8c13.slice/crio-1dc871c311ed1b6ba04134770b132215960afd0adc7e7753a68e6a36988c3199 WatchSource:0}: Error finding container 1dc871c311ed1b6ba04134770b132215960afd0adc7e7753a68e6a36988c3199: Status 404 returned error can't find the container with id 1dc871c311ed1b6ba04134770b132215960afd0adc7e7753a68e6a36988c3199 Oct 04 05:07:32 crc kubenswrapper[4802]: I1004 05:07:32.075405 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerStarted","Data":"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069"} Oct 04 05:07:32 crc kubenswrapper[4802]: I1004 05:07:32.076196 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:07:32 crc kubenswrapper[4802]: I1004 05:07:32.090024 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03367bc3-3554-4b43-8215-070e0d9d8c13","Type":"ContainerStarted","Data":"1dc871c311ed1b6ba04134770b132215960afd0adc7e7753a68e6a36988c3199"} Oct 04 05:07:32 crc kubenswrapper[4802]: I1004 05:07:32.385156 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe18510a-9ff5-431e-8f89-be3ede5a5bd4" path="/var/lib/kubelet/pods/fe18510a-9ff5-431e-8f89-be3ede5a5bd4/volumes" Oct 04 05:07:33 crc kubenswrapper[4802]: I1004 05:07:33.107720 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"03367bc3-3554-4b43-8215-070e0d9d8c13","Type":"ContainerStarted","Data":"db4da79c8fe778765ef34d1c035357fbcc7f8c4675e31a5e2d87cab2d0fc40dd"} Oct 04 05:07:33 crc kubenswrapper[4802]: I1004 05:07:33.108099 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03367bc3-3554-4b43-8215-070e0d9d8c13","Type":"ContainerStarted","Data":"450e8af9aab210c411e7007ca48ba72e3c1b8b83f5499195e92e16af44c860b4"} Oct 04 05:07:33 crc kubenswrapper[4802]: I1004 05:07:33.129164 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.953199262 podStartE2EDuration="6.129144722s" podCreationTimestamp="2025-10-04 05:07:27 +0000 UTC" firstStartedPulling="2025-10-04 05:07:28.052789211 +0000 UTC m=+1290.460789836" lastFinishedPulling="2025-10-04 05:07:31.228734671 +0000 UTC m=+1293.636735296" observedRunningTime="2025-10-04 05:07:32.100916415 +0000 UTC m=+1294.508917040" watchObservedRunningTime="2025-10-04 05:07:33.129144722 +0000 UTC m=+1295.537145357" Oct 04 05:07:33 crc kubenswrapper[4802]: I1004 05:07:33.132630 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.132614541 podStartE2EDuration="2.132614541s" podCreationTimestamp="2025-10-04 05:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:07:33.127785603 +0000 UTC m=+1295.535786248" watchObservedRunningTime="2025-10-04 05:07:33.132614541 +0000 UTC m=+1295.540615166" Oct 04 05:07:33 crc kubenswrapper[4802]: I1004 05:07:33.784719 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.101492 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 
05:07:34.114606 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="ceilometer-central-agent" containerID="cri-o://e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67" gracePeriod=30 Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.114658 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="sg-core" containerID="cri-o://82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1" gracePeriod=30 Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.114783 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="ceilometer-notification-agent" containerID="cri-o://e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21" gracePeriod=30 Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.114805 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="proxy-httpd" containerID="cri-o://3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069" gracePeriod=30 Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.577464 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b8fe-account-create-klqmh"] Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.580520 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b8fe-account-create-klqmh" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.582760 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.589030 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b8fe-account-create-klqmh"] Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.674819 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7gng\" (UniqueName: \"kubernetes.io/projected/1acdc363-88c6-46f9-b133-f8999c760804-kube-api-access-q7gng\") pod \"nova-cell0-b8fe-account-create-klqmh\" (UID: \"1acdc363-88c6-46f9-b133-f8999c760804\") " pod="openstack/nova-cell0-b8fe-account-create-klqmh" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.777774 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7gng\" (UniqueName: \"kubernetes.io/projected/1acdc363-88c6-46f9-b133-f8999c760804-kube-api-access-q7gng\") pod \"nova-cell0-b8fe-account-create-klqmh\" (UID: \"1acdc363-88c6-46f9-b133-f8999c760804\") " pod="openstack/nova-cell0-b8fe-account-create-klqmh" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.780489 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a20f-account-create-6n47p"] Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.789954 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a20f-account-create-6n47p" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.792120 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.796159 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a20f-account-create-6n47p"] Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.820980 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7gng\" (UniqueName: \"kubernetes.io/projected/1acdc363-88c6-46f9-b133-f8999c760804-kube-api-access-q7gng\") pod \"nova-cell0-b8fe-account-create-klqmh\" (UID: \"1acdc363-88c6-46f9-b133-f8999c760804\") " pod="openstack/nova-cell0-b8fe-account-create-klqmh" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.879966 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwh2s\" (UniqueName: \"kubernetes.io/projected/4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b-kube-api-access-pwh2s\") pod \"nova-cell1-a20f-account-create-6n47p\" (UID: \"4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b\") " pod="openstack/nova-cell1-a20f-account-create-6n47p" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.979264 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.981529 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwh2s\" (UniqueName: \"kubernetes.io/projected/4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b-kube-api-access-pwh2s\") pod \"nova-cell1-a20f-account-create-6n47p\" (UID: \"4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b\") " pod="openstack/nova-cell1-a20f-account-create-6n47p" Oct 04 05:07:34 crc kubenswrapper[4802]: I1004 05:07:34.993919 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b8fe-account-create-klqmh" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.003177 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwh2s\" (UniqueName: \"kubernetes.io/projected/4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b-kube-api-access-pwh2s\") pod \"nova-cell1-a20f-account-create-6n47p\" (UID: \"4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b\") " pod="openstack/nova-cell1-a20f-account-create-6n47p" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.084895 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-run-httpd\") pod \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.084994 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-sg-core-conf-yaml\") pod \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.085046 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-config-data\") pod \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.085134 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-scripts\") pod \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.085208 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-combined-ca-bundle\") pod \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.085283 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mprx7\" (UniqueName: \"kubernetes.io/projected/e4c99ba7-5bb7-4890-87e8-917d67ea382b-kube-api-access-mprx7\") pod \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.085347 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-log-httpd\") pod \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\" (UID: \"e4c99ba7-5bb7-4890-87e8-917d67ea382b\") " Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.087041 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4c99ba7-5bb7-4890-87e8-917d67ea382b" (UID: "e4c99ba7-5bb7-4890-87e8-917d67ea382b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.087064 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4c99ba7-5bb7-4890-87e8-917d67ea382b" (UID: "e4c99ba7-5bb7-4890-87e8-917d67ea382b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.092584 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-scripts" (OuterVolumeSpecName: "scripts") pod "e4c99ba7-5bb7-4890-87e8-917d67ea382b" (UID: "e4c99ba7-5bb7-4890-87e8-917d67ea382b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.092711 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c99ba7-5bb7-4890-87e8-917d67ea382b-kube-api-access-mprx7" (OuterVolumeSpecName: "kube-api-access-mprx7") pod "e4c99ba7-5bb7-4890-87e8-917d67ea382b" (UID: "e4c99ba7-5bb7-4890-87e8-917d67ea382b"). InnerVolumeSpecName "kube-api-access-mprx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.120521 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4c99ba7-5bb7-4890-87e8-917d67ea382b" (UID: "e4c99ba7-5bb7-4890-87e8-917d67ea382b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.141942 4802 generic.go:334] "Generic (PLEG): container finished" podID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerID="3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069" exitCode=0 Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.142171 4802 generic.go:334] "Generic (PLEG): container finished" podID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerID="82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1" exitCode=2 Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.142183 4802 generic.go:334] "Generic (PLEG): container finished" podID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerID="e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21" exitCode=0 Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.142191 4802 generic.go:334] "Generic (PLEG): container finished" podID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerID="e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67" exitCode=0 Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.142267 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.142994 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerDied","Data":"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069"} Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.143020 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerDied","Data":"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1"} Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.143032 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerDied","Data":"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21"} Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.143041 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerDied","Data":"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67"} Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.143050 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4c99ba7-5bb7-4890-87e8-917d67ea382b","Type":"ContainerDied","Data":"19866d8b589b3c9b7a8abb12ce0e59c20f738b94db7821effd6bfbc71019e15a"} Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.143065 4802 scope.go:117] "RemoveContainer" containerID="3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.161806 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"e4c99ba7-5bb7-4890-87e8-917d67ea382b" (UID: "e4c99ba7-5bb7-4890-87e8-917d67ea382b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.166019 4802 scope.go:117] "RemoveContainer" containerID="82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.180941 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a20f-account-create-6n47p" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.187923 4802 scope.go:117] "RemoveContainer" containerID="e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.187930 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.188328 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4c99ba7-5bb7-4890-87e8-917d67ea382b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.188345 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.188357 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.188369 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.188383 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mprx7\" (UniqueName: \"kubernetes.io/projected/e4c99ba7-5bb7-4890-87e8-917d67ea382b-kube-api-access-mprx7\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.214428 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-config-data" (OuterVolumeSpecName: "config-data") pod "e4c99ba7-5bb7-4890-87e8-917d67ea382b" (UID: "e4c99ba7-5bb7-4890-87e8-917d67ea382b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.218211 4802 scope.go:117] "RemoveContainer" containerID="e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.289609 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c99ba7-5bb7-4890-87e8-917d67ea382b-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.317872 4802 scope.go:117] "RemoveContainer" containerID="3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069" Oct 04 05:07:35 crc kubenswrapper[4802]: E1004 05:07:35.318296 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": container with ID starting with 3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069 not found: ID does not exist" containerID="3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.318323 4802 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069"} err="failed to get container status \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": rpc error: code = NotFound desc = could not find container \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": container with ID starting with 3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.318351 4802 scope.go:117] "RemoveContainer" containerID="82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1" Oct 04 05:07:35 crc kubenswrapper[4802]: E1004 05:07:35.319144 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": container with ID starting with 82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1 not found: ID does not exist" containerID="82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.319195 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1"} err="failed to get container status \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": rpc error: code = NotFound desc = could not find container \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": container with ID starting with 82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.319228 4802 scope.go:117] "RemoveContainer" containerID="e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21" Oct 04 05:07:35 crc kubenswrapper[4802]: E1004 
05:07:35.319555 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": container with ID starting with e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21 not found: ID does not exist" containerID="e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.319580 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21"} err="failed to get container status \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": rpc error: code = NotFound desc = could not find container \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": container with ID starting with e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.319599 4802 scope.go:117] "RemoveContainer" containerID="e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67" Oct 04 05:07:35 crc kubenswrapper[4802]: E1004 05:07:35.319921 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": container with ID starting with e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67 not found: ID does not exist" containerID="e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.319942 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67"} err="failed to get container status \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": rpc 
error: code = NotFound desc = could not find container \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": container with ID starting with e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.319957 4802 scope.go:117] "RemoveContainer" containerID="3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.320160 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069"} err="failed to get container status \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": rpc error: code = NotFound desc = could not find container \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": container with ID starting with 3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.320179 4802 scope.go:117] "RemoveContainer" containerID="82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.320369 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1"} err="failed to get container status \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": rpc error: code = NotFound desc = could not find container \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": container with ID starting with 82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.320387 4802 scope.go:117] "RemoveContainer" containerID="e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21" Oct 04 05:07:35 crc 
kubenswrapper[4802]: I1004 05:07:35.320891 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21"} err="failed to get container status \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": rpc error: code = NotFound desc = could not find container \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": container with ID starting with e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.320916 4802 scope.go:117] "RemoveContainer" containerID="e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.321291 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67"} err="failed to get container status \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": rpc error: code = NotFound desc = could not find container \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": container with ID starting with e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.321324 4802 scope.go:117] "RemoveContainer" containerID="3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.322513 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069"} err="failed to get container status \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": rpc error: code = NotFound desc = could not find container \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": container 
with ID starting with 3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.322546 4802 scope.go:117] "RemoveContainer" containerID="82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.322803 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1"} err="failed to get container status \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": rpc error: code = NotFound desc = could not find container \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": container with ID starting with 82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.322829 4802 scope.go:117] "RemoveContainer" containerID="e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.323080 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21"} err="failed to get container status \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": rpc error: code = NotFound desc = could not find container \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": container with ID starting with e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.323116 4802 scope.go:117] "RemoveContainer" containerID="e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.323343 4802 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67"} err="failed to get container status \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": rpc error: code = NotFound desc = could not find container \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": container with ID starting with e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.323372 4802 scope.go:117] "RemoveContainer" containerID="3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.323595 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069"} err="failed to get container status \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": rpc error: code = NotFound desc = could not find container \"3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069\": container with ID starting with 3be840bb71917b7c40e3005dcc51317801eced603f3944dc58b98dd63b6e8069 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.323626 4802 scope.go:117] "RemoveContainer" containerID="82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.323956 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1"} err="failed to get container status \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": rpc error: code = NotFound desc = could not find container \"82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1\": container with ID starting with 82a16288ac9045990f6d54fddc2d9502d501c8ca5a5f591e650de85ca58ddae1 not found: ID does not 
exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.324044 4802 scope.go:117] "RemoveContainer" containerID="e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.324412 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21"} err="failed to get container status \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": rpc error: code = NotFound desc = could not find container \"e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21\": container with ID starting with e6ab5e01673a3d76eb1c6b6b37e4789fa1b439f1cbf8bada7a78455d2001ad21 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.324435 4802 scope.go:117] "RemoveContainer" containerID="e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.324705 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67"} err="failed to get container status \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": rpc error: code = NotFound desc = could not find container \"e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67\": container with ID starting with e62f3a3ed955f75c093c2ae2866dd7a7df5e2d41562c5b4400e9a8da8b9bee67 not found: ID does not exist" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.474277 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b8fe-account-create-klqmh"] Oct 04 05:07:35 crc kubenswrapper[4802]: W1004 05:07:35.480549 4802 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1acdc363_88c6_46f9_b133_f8999c760804.slice/crio-c67dfce04bd7c26aae47234789a1cc2c78df164cd0160790b23017999b5c0406 WatchSource:0}: Error finding container c67dfce04bd7c26aae47234789a1cc2c78df164cd0160790b23017999b5c0406: Status 404 returned error can't find the container with id c67dfce04bd7c26aae47234789a1cc2c78df164cd0160790b23017999b5c0406 Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.485178 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.494158 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.504311 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:35 crc kubenswrapper[4802]: E1004 05:07:35.504716 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="proxy-httpd" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.504735 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="proxy-httpd" Oct 04 05:07:35 crc kubenswrapper[4802]: E1004 05:07:35.504753 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="sg-core" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.504761 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="sg-core" Oct 04 05:07:35 crc kubenswrapper[4802]: E1004 05:07:35.504771 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="ceilometer-notification-agent" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.504778 4802 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="ceilometer-notification-agent" Oct 04 05:07:35 crc kubenswrapper[4802]: E1004 05:07:35.504796 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="ceilometer-central-agent" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.504803 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="ceilometer-central-agent" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.504980 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="ceilometer-notification-agent" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.504992 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="sg-core" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.505002 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="proxy-httpd" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.505018 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" containerName="ceilometer-central-agent" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.506696 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.508983 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.509271 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.523014 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.596234 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkx8b\" (UniqueName: \"kubernetes.io/projected/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-kube-api-access-lkx8b\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.596546 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-run-httpd\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.596565 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.596588 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.596604 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-scripts\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.596675 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-log-httpd\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.596699 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-config-data\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.646074 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a20f-account-create-6n47p"] Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.698451 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkx8b\" (UniqueName: \"kubernetes.io/projected/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-kube-api-access-lkx8b\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.698526 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-run-httpd\") pod \"ceilometer-0\" (UID: 
\"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.698555 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.698588 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-scripts\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.698605 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.698714 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-log-httpd\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.698744 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-config-data\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.700488 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-run-httpd\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.700568 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-log-httpd\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.704112 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-scripts\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.707626 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-config-data\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.707997 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.711408 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.716152 4802 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lkx8b\" (UniqueName: \"kubernetes.io/projected/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-kube-api-access-lkx8b\") pod \"ceilometer-0\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " pod="openstack/ceilometer-0" Oct 04 05:07:35 crc kubenswrapper[4802]: I1004 05:07:35.838002 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.153061 4802 generic.go:334] "Generic (PLEG): container finished" podID="1acdc363-88c6-46f9-b133-f8999c760804" containerID="9f6be82a578c052f2e5c92c3fc66bb9bba6ca2bd61c980a2efb7eed5d0a797be" exitCode=0 Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.153120 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b8fe-account-create-klqmh" event={"ID":"1acdc363-88c6-46f9-b133-f8999c760804","Type":"ContainerDied","Data":"9f6be82a578c052f2e5c92c3fc66bb9bba6ca2bd61c980a2efb7eed5d0a797be"} Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.153399 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b8fe-account-create-klqmh" event={"ID":"1acdc363-88c6-46f9-b133-f8999c760804","Type":"ContainerStarted","Data":"c67dfce04bd7c26aae47234789a1cc2c78df164cd0160790b23017999b5c0406"} Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.155156 4802 generic.go:334] "Generic (PLEG): container finished" podID="4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b" containerID="9ea686ffd85d62daa78fed317c37819b25cc6f73417425e9276f30b17eea0d1c" exitCode=0 Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.155205 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a20f-account-create-6n47p" event={"ID":"4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b","Type":"ContainerDied","Data":"9ea686ffd85d62daa78fed317c37819b25cc6f73417425e9276f30b17eea0d1c"} Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.155236 4802 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a20f-account-create-6n47p" event={"ID":"4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b","Type":"ContainerStarted","Data":"09b04c4aca4bf317bc4f38dc619e49a8d5fb86946eafea9e7d06d4461175916f"} Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.280790 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.369874 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c99ba7-5bb7-4890-87e8-917d67ea382b" path="/var/lib/kubelet/pods/e4c99ba7-5bb7-4890-87e8-917d67ea382b/volumes" Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.447726 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.682375 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77c7dfb8d9-7pqjl" Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.761307 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8b55b6676-mqh5g"] Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.761543 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8b55b6676-mqh5g" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" containerName="neutron-api" containerID="cri-o://aad0ce4cedcad2607951140a87139e9b6eac1b050f7651e53cc40b494c9496fd" gracePeriod=30 Oct 04 05:07:36 crc kubenswrapper[4802]: I1004 05:07:36.762015 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8b55b6676-mqh5g" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" containerName="neutron-httpd" containerID="cri-o://6e90d1518d93c4004b110815414000891842f834b62f02e3e5dc9ed713252cc7" gracePeriod=30 Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.163839 4802 generic.go:334] "Generic (PLEG): container finished" 
podID="72327efa-0833-4cf0-bc74-72186ffab61d" containerID="6e90d1518d93c4004b110815414000891842f834b62f02e3e5dc9ed713252cc7" exitCode=0 Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.163908 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b55b6676-mqh5g" event={"ID":"72327efa-0833-4cf0-bc74-72186ffab61d","Type":"ContainerDied","Data":"6e90d1518d93c4004b110815414000891842f834b62f02e3e5dc9ed713252cc7"} Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.165916 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerStarted","Data":"7cfc9de45bbae9fcd3fca0aaa259f8122a51b65f87170d308cfb9f647437d500"} Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.337938 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.558179 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b8fe-account-create-klqmh" Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.633256 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7gng\" (UniqueName: \"kubernetes.io/projected/1acdc363-88c6-46f9-b133-f8999c760804-kube-api-access-q7gng\") pod \"1acdc363-88c6-46f9-b133-f8999c760804\" (UID: \"1acdc363-88c6-46f9-b133-f8999c760804\") " Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.639872 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1acdc363-88c6-46f9-b133-f8999c760804-kube-api-access-q7gng" (OuterVolumeSpecName: "kube-api-access-q7gng") pod "1acdc363-88c6-46f9-b133-f8999c760804" (UID: "1acdc363-88c6-46f9-b133-f8999c760804"). InnerVolumeSpecName "kube-api-access-q7gng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.726390 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a20f-account-create-6n47p" Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.735280 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7gng\" (UniqueName: \"kubernetes.io/projected/1acdc363-88c6-46f9-b133-f8999c760804-kube-api-access-q7gng\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.836136 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwh2s\" (UniqueName: \"kubernetes.io/projected/4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b-kube-api-access-pwh2s\") pod \"4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b\" (UID: \"4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b\") " Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.848529 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b-kube-api-access-pwh2s" (OuterVolumeSpecName: "kube-api-access-pwh2s") pod "4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b" (UID: "4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b"). InnerVolumeSpecName "kube-api-access-pwh2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:37 crc kubenswrapper[4802]: I1004 05:07:37.939086 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwh2s\" (UniqueName: \"kubernetes.io/projected/4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b-kube-api-access-pwh2s\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:38 crc kubenswrapper[4802]: I1004 05:07:38.176664 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerStarted","Data":"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb"} Oct 04 05:07:38 crc kubenswrapper[4802]: I1004 05:07:38.179175 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b8fe-account-create-klqmh" event={"ID":"1acdc363-88c6-46f9-b133-f8999c760804","Type":"ContainerDied","Data":"c67dfce04bd7c26aae47234789a1cc2c78df164cd0160790b23017999b5c0406"} Oct 04 05:07:38 crc kubenswrapper[4802]: I1004 05:07:38.179210 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67dfce04bd7c26aae47234789a1cc2c78df164cd0160790b23017999b5c0406" Oct 04 05:07:38 crc kubenswrapper[4802]: I1004 05:07:38.179210 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b8fe-account-create-klqmh" Oct 04 05:07:38 crc kubenswrapper[4802]: I1004 05:07:38.181009 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a20f-account-create-6n47p" event={"ID":"4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b","Type":"ContainerDied","Data":"09b04c4aca4bf317bc4f38dc619e49a8d5fb86946eafea9e7d06d4461175916f"} Oct 04 05:07:38 crc kubenswrapper[4802]: I1004 05:07:38.181056 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09b04c4aca4bf317bc4f38dc619e49a8d5fb86946eafea9e7d06d4461175916f" Oct 04 05:07:38 crc kubenswrapper[4802]: I1004 05:07:38.181106 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a20f-account-create-6n47p" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.202358 4802 generic.go:334] "Generic (PLEG): container finished" podID="72327efa-0833-4cf0-bc74-72186ffab61d" containerID="aad0ce4cedcad2607951140a87139e9b6eac1b050f7651e53cc40b494c9496fd" exitCode=0 Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.202789 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b55b6676-mqh5g" event={"ID":"72327efa-0833-4cf0-bc74-72186ffab61d","Type":"ContainerDied","Data":"aad0ce4cedcad2607951140a87139e9b6eac1b050f7651e53cc40b494c9496fd"} Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.205726 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerStarted","Data":"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849"} Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.205770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerStarted","Data":"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa"} Oct 04 05:07:39 crc 
kubenswrapper[4802]: I1004 05:07:39.332749 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.466438 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-config\") pod \"72327efa-0833-4cf0-bc74-72186ffab61d\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.466596 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-ovndb-tls-certs\") pod \"72327efa-0833-4cf0-bc74-72186ffab61d\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.467320 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-httpd-config\") pod \"72327efa-0833-4cf0-bc74-72186ffab61d\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.467435 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj7gz\" (UniqueName: \"kubernetes.io/projected/72327efa-0833-4cf0-bc74-72186ffab61d-kube-api-access-dj7gz\") pod \"72327efa-0833-4cf0-bc74-72186ffab61d\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.467490 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-combined-ca-bundle\") pod \"72327efa-0833-4cf0-bc74-72186ffab61d\" (UID: \"72327efa-0833-4cf0-bc74-72186ffab61d\") " Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.487114 4802 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72327efa-0833-4cf0-bc74-72186ffab61d-kube-api-access-dj7gz" (OuterVolumeSpecName: "kube-api-access-dj7gz") pod "72327efa-0833-4cf0-bc74-72186ffab61d" (UID: "72327efa-0833-4cf0-bc74-72186ffab61d"). InnerVolumeSpecName "kube-api-access-dj7gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.489248 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "72327efa-0833-4cf0-bc74-72186ffab61d" (UID: "72327efa-0833-4cf0-bc74-72186ffab61d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.528844 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-config" (OuterVolumeSpecName: "config") pod "72327efa-0833-4cf0-bc74-72186ffab61d" (UID: "72327efa-0833-4cf0-bc74-72186ffab61d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.533364 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72327efa-0833-4cf0-bc74-72186ffab61d" (UID: "72327efa-0833-4cf0-bc74-72186ffab61d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.546188 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "72327efa-0833-4cf0-bc74-72186ffab61d" (UID: "72327efa-0833-4cf0-bc74-72186ffab61d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.570088 4802 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.570129 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj7gz\" (UniqueName: \"kubernetes.io/projected/72327efa-0833-4cf0-bc74-72186ffab61d-kube-api-access-dj7gz\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.570142 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.570152 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.570162 4802 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/72327efa-0833-4cf0-bc74-72186ffab61d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.908721 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nxhb5"] Oct 04 
05:07:39 crc kubenswrapper[4802]: E1004 05:07:39.909084 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b" containerName="mariadb-account-create" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909100 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b" containerName="mariadb-account-create" Oct 04 05:07:39 crc kubenswrapper[4802]: E1004 05:07:39.909108 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" containerName="neutron-httpd" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909115 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" containerName="neutron-httpd" Oct 04 05:07:39 crc kubenswrapper[4802]: E1004 05:07:39.909132 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" containerName="neutron-api" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909139 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" containerName="neutron-api" Oct 04 05:07:39 crc kubenswrapper[4802]: E1004 05:07:39.909156 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acdc363-88c6-46f9-b133-f8999c760804" containerName="mariadb-account-create" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909162 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="1acdc363-88c6-46f9-b133-f8999c760804" containerName="mariadb-account-create" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909320 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b" containerName="mariadb-account-create" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909333 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="1acdc363-88c6-46f9-b133-f8999c760804" 
containerName="mariadb-account-create" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909343 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" containerName="neutron-api" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909355 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" containerName="neutron-httpd" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.909928 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.913757 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.914226 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.914617 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mrprz" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.931592 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nxhb5"] Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.977299 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-scripts\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.977349 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-config-data\") pod 
\"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.977456 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:39 crc kubenswrapper[4802]: I1004 05:07:39.977480 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hblk\" (UniqueName: \"kubernetes.io/projected/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-kube-api-access-6hblk\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.079101 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-scripts\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.079151 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-config-data\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.079227 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.079251 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hblk\" (UniqueName: \"kubernetes.io/projected/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-kube-api-access-6hblk\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.082921 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-scripts\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.083114 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-config-data\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.083232 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.096204 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hblk\" (UniqueName: 
\"kubernetes.io/projected/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-kube-api-access-6hblk\") pod \"nova-cell0-conductor-db-sync-nxhb5\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.215810 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b55b6676-mqh5g" event={"ID":"72327efa-0833-4cf0-bc74-72186ffab61d","Type":"ContainerDied","Data":"8d9c734f6f8023e1fc28b3a9f25de5476e5c569893c6fdc03f2df4d408a9fe59"} Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.215868 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8b55b6676-mqh5g" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.215889 4802 scope.go:117] "RemoveContainer" containerID="6e90d1518d93c4004b110815414000891842f834b62f02e3e5dc9ed713252cc7" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.227626 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.231624 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.259600 4802 scope.go:117] "RemoveContainer" containerID="aad0ce4cedcad2607951140a87139e9b6eac1b050f7651e53cc40b494c9496fd" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.261971 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8b55b6676-mqh5g"] Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.271962 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8b55b6676-mqh5g"] Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.381865 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72327efa-0833-4cf0-bc74-72186ffab61d" path="/var/lib/kubelet/pods/72327efa-0833-4cf0-bc74-72186ffab61d/volumes" Oct 04 05:07:40 crc kubenswrapper[4802]: I1004 05:07:40.782590 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nxhb5"] Oct 04 05:07:41 crc kubenswrapper[4802]: I1004 05:07:41.225324 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" event={"ID":"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db","Type":"ContainerStarted","Data":"295c254daf6782420a47e3abb725eaa2032ec697c12101ecef3d41aeb8165d4a"} Oct 04 05:07:41 crc kubenswrapper[4802]: I1004 05:07:41.679928 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 04 05:07:42 crc kubenswrapper[4802]: I1004 05:07:42.243208 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerStarted","Data":"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152"} Oct 04 05:07:43 crc kubenswrapper[4802]: I1004 
05:07:43.250471 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="ceilometer-central-agent" containerID="cri-o://95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb" gracePeriod=30 Oct 04 05:07:43 crc kubenswrapper[4802]: I1004 05:07:43.250602 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="ceilometer-notification-agent" containerID="cri-o://a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa" gracePeriod=30 Oct 04 05:07:43 crc kubenswrapper[4802]: I1004 05:07:43.250604 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="proxy-httpd" containerID="cri-o://a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152" gracePeriod=30 Oct 04 05:07:43 crc kubenswrapper[4802]: I1004 05:07:43.250728 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:07:43 crc kubenswrapper[4802]: I1004 05:07:43.250599 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="sg-core" containerID="cri-o://128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849" gracePeriod=30 Oct 04 05:07:43 crc kubenswrapper[4802]: I1004 05:07:43.273468 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.607192445 podStartE2EDuration="8.27344002s" podCreationTimestamp="2025-10-04 05:07:35 +0000 UTC" firstStartedPulling="2025-10-04 05:07:36.277111575 +0000 UTC m=+1298.685112200" lastFinishedPulling="2025-10-04 05:07:41.94335916 +0000 UTC m=+1304.351359775" observedRunningTime="2025-10-04 
05:07:43.269845898 +0000 UTC m=+1305.677846533" watchObservedRunningTime="2025-10-04 05:07:43.27344002 +0000 UTC m=+1305.681440645" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.069292 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.151856 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-sg-core-conf-yaml\") pod \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.151906 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-scripts\") pod \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.151949 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-combined-ca-bundle\") pod \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.151979 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkx8b\" (UniqueName: \"kubernetes.io/projected/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-kube-api-access-lkx8b\") pod \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.152088 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-config-data\") pod 
\"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.152131 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-log-httpd\") pod \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.152159 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-run-httpd\") pod \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\" (UID: \"654fef6a-ec9f-4ce0-b18c-497d0647cf2e\") " Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.153082 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "654fef6a-ec9f-4ce0-b18c-497d0647cf2e" (UID: "654fef6a-ec9f-4ce0-b18c-497d0647cf2e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.153875 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "654fef6a-ec9f-4ce0-b18c-497d0647cf2e" (UID: "654fef6a-ec9f-4ce0-b18c-497d0647cf2e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.157957 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-scripts" (OuterVolumeSpecName: "scripts") pod "654fef6a-ec9f-4ce0-b18c-497d0647cf2e" (UID: "654fef6a-ec9f-4ce0-b18c-497d0647cf2e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.158023 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-kube-api-access-lkx8b" (OuterVolumeSpecName: "kube-api-access-lkx8b") pod "654fef6a-ec9f-4ce0-b18c-497d0647cf2e" (UID: "654fef6a-ec9f-4ce0-b18c-497d0647cf2e"). InnerVolumeSpecName "kube-api-access-lkx8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.178419 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "654fef6a-ec9f-4ce0-b18c-497d0647cf2e" (UID: "654fef6a-ec9f-4ce0-b18c-497d0647cf2e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.218153 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "654fef6a-ec9f-4ce0-b18c-497d0647cf2e" (UID: "654fef6a-ec9f-4ce0-b18c-497d0647cf2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.242231 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-config-data" (OuterVolumeSpecName: "config-data") pod "654fef6a-ec9f-4ce0-b18c-497d0647cf2e" (UID: "654fef6a-ec9f-4ce0-b18c-497d0647cf2e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.255039 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.255068 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.255079 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.255088 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkx8b\" (UniqueName: \"kubernetes.io/projected/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-kube-api-access-lkx8b\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.255098 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.255105 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.255113 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/654fef6a-ec9f-4ce0-b18c-497d0647cf2e-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265802 4802 generic.go:334] "Generic 
(PLEG): container finished" podID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerID="a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152" exitCode=0 Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265827 4802 generic.go:334] "Generic (PLEG): container finished" podID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerID="128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849" exitCode=2 Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265854 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265886 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerDied","Data":"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152"} Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265833 4802 generic.go:334] "Generic (PLEG): container finished" podID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerID="a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa" exitCode=0 Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265944 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerDied","Data":"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849"} Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265960 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerDied","Data":"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa"} Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265973 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerDied","Data":"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb"} Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265948 4802 generic.go:334] "Generic (PLEG): container finished" podID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerID="95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb" exitCode=0 Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.265995 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"654fef6a-ec9f-4ce0-b18c-497d0647cf2e","Type":"ContainerDied","Data":"7cfc9de45bbae9fcd3fca0aaa259f8122a51b65f87170d308cfb9f647437d500"} Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.266010 4802 scope.go:117] "RemoveContainer" containerID="a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.295805 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.302025 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.305136 4802 scope.go:117] "RemoveContainer" containerID="128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.324796 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:44 crc kubenswrapper[4802]: E1004 05:07:44.326113 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="sg-core" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.326133 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="sg-core" Oct 04 05:07:44 crc kubenswrapper[4802]: E1004 05:07:44.326163 4802 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="ceilometer-notification-agent" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.326169 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="ceilometer-notification-agent" Oct 04 05:07:44 crc kubenswrapper[4802]: E1004 05:07:44.326179 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="ceilometer-central-agent" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.326184 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="ceilometer-central-agent" Oct 04 05:07:44 crc kubenswrapper[4802]: E1004 05:07:44.326199 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="proxy-httpd" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.326204 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="proxy-httpd" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.326458 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="proxy-httpd" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.326478 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="sg-core" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.326492 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="ceilometer-central-agent" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.326503 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" containerName="ceilometer-notification-agent" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.328735 
4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.331594 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.345355 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.346111 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.371995 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654fef6a-ec9f-4ce0-b18c-497d0647cf2e" path="/var/lib/kubelet/pods/654fef6a-ec9f-4ce0-b18c-497d0647cf2e/volumes" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.460122 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzfc\" (UniqueName: \"kubernetes.io/projected/1e4bcede-83d2-41fc-b88d-ac0efecde184-kube-api-access-jwzfc\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.460183 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-scripts\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.460204 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-run-httpd\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 
05:07:44.460250 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-config-data\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.460265 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.460360 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.460388 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-log-httpd\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.551760 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-dbb8-account-create-s4pzt"] Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.559758 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dbb8-account-create-s4pzt" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.562023 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.562679 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.562723 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-log-httpd\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.562796 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzfc\" (UniqueName: \"kubernetes.io/projected/1e4bcede-83d2-41fc-b88d-ac0efecde184-kube-api-access-jwzfc\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.562830 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-scripts\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.562850 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-run-httpd\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " 
pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.562877 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-config-data\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.562894 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.564488 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-log-httpd\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.567277 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-scripts\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.567365 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-run-httpd\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.569115 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.577105 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.582745 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dbb8-account-create-s4pzt"] Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.585204 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-config-data\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.591215 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzfc\" (UniqueName: \"kubernetes.io/projected/1e4bcede-83d2-41fc-b88d-ac0efecde184-kube-api-access-jwzfc\") pod \"ceilometer-0\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.662066 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.664911 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5rsp\" (UniqueName: \"kubernetes.io/projected/9d4707af-757c-4df5-935f-8a87a4fcde55-kube-api-access-v5rsp\") pod \"nova-api-dbb8-account-create-s4pzt\" (UID: \"9d4707af-757c-4df5-935f-8a87a4fcde55\") " pod="openstack/nova-api-dbb8-account-create-s4pzt" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.767535 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5rsp\" (UniqueName: \"kubernetes.io/projected/9d4707af-757c-4df5-935f-8a87a4fcde55-kube-api-access-v5rsp\") pod \"nova-api-dbb8-account-create-s4pzt\" (UID: \"9d4707af-757c-4df5-935f-8a87a4fcde55\") " pod="openstack/nova-api-dbb8-account-create-s4pzt" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.796313 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5rsp\" (UniqueName: \"kubernetes.io/projected/9d4707af-757c-4df5-935f-8a87a4fcde55-kube-api-access-v5rsp\") pod \"nova-api-dbb8-account-create-s4pzt\" (UID: \"9d4707af-757c-4df5-935f-8a87a4fcde55\") " pod="openstack/nova-api-dbb8-account-create-s4pzt" Oct 04 05:07:44 crc kubenswrapper[4802]: I1004 05:07:44.959433 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dbb8-account-create-s4pzt" Oct 04 05:07:52 crc kubenswrapper[4802]: I1004 05:07:52.662315 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:07:52 crc kubenswrapper[4802]: I1004 05:07:52.662962 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:07:52 crc kubenswrapper[4802]: I1004 05:07:52.663088 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:07:52 crc kubenswrapper[4802]: I1004 05:07:52.664110 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c8c1e44715835d6ef2d00db5cee02bc888c676507b5f91dafd169007af48bd8"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:07:52 crc kubenswrapper[4802]: I1004 05:07:52.664211 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://2c8c1e44715835d6ef2d00db5cee02bc888c676507b5f91dafd169007af48bd8" gracePeriod=600 Oct 04 05:07:53 crc kubenswrapper[4802]: I1004 05:07:53.351431 4802 generic.go:334] "Generic (PLEG): container finished" 
podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="2c8c1e44715835d6ef2d00db5cee02bc888c676507b5f91dafd169007af48bd8" exitCode=0 Oct 04 05:07:53 crc kubenswrapper[4802]: I1004 05:07:53.351492 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"2c8c1e44715835d6ef2d00db5cee02bc888c676507b5f91dafd169007af48bd8"} Oct 04 05:07:53 crc kubenswrapper[4802]: I1004 05:07:53.958064 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.645180 4802 scope.go:117] "RemoveContainer" containerID="a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa" Oct 04 05:07:55 crc kubenswrapper[4802]: E1004 05:07:55.779797 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Oct 04 05:07:55 crc kubenswrapper[4802]: E1004 05:07:55.780234 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hblk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-nxhb5_openstack(d698cfff-f3cf-46c2-9ff2-f1ac8262d5db): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:07:55 crc kubenswrapper[4802]: E1004 05:07:55.782278 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" podUID="d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.807183 4802 scope.go:117] "RemoveContainer" containerID="95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.829839 4802 scope.go:117] "RemoveContainer" containerID="a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152" Oct 04 05:07:55 crc kubenswrapper[4802]: E1004 05:07:55.830344 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": container with ID starting with a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152 not found: ID does not exist" containerID="a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.830417 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152"} err="failed to get container status \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": rpc error: code = NotFound desc = could not find container \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": container with ID starting with a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152 not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.830476 4802 scope.go:117] "RemoveContainer" 
containerID="128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849" Oct 04 05:07:55 crc kubenswrapper[4802]: E1004 05:07:55.830996 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": container with ID starting with 128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849 not found: ID does not exist" containerID="128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.831069 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849"} err="failed to get container status \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": rpc error: code = NotFound desc = could not find container \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": container with ID starting with 128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849 not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.831083 4802 scope.go:117] "RemoveContainer" containerID="a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa" Oct 04 05:07:55 crc kubenswrapper[4802]: E1004 05:07:55.831851 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": container with ID starting with a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa not found: ID does not exist" containerID="a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.831884 4802 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa"} err="failed to get container status \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": rpc error: code = NotFound desc = could not find container \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": container with ID starting with a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.831907 4802 scope.go:117] "RemoveContainer" containerID="95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb" Oct 04 05:07:55 crc kubenswrapper[4802]: E1004 05:07:55.832600 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": container with ID starting with 95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb not found: ID does not exist" containerID="95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.832630 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb"} err="failed to get container status \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": rpc error: code = NotFound desc = could not find container \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": container with ID starting with 95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.832666 4802 scope.go:117] "RemoveContainer" containerID="a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.832988 4802 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152"} err="failed to get container status \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": rpc error: code = NotFound desc = could not find container \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": container with ID starting with a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152 not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833003 4802 scope.go:117] "RemoveContainer" containerID="128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833198 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849"} err="failed to get container status \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": rpc error: code = NotFound desc = could not find container \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": container with ID starting with 128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849 not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833212 4802 scope.go:117] "RemoveContainer" containerID="a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833464 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa"} err="failed to get container status \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": rpc error: code = NotFound desc = could not find container \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": container with ID starting with a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa not 
found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833477 4802 scope.go:117] "RemoveContainer" containerID="95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833681 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb"} err="failed to get container status \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": rpc error: code = NotFound desc = could not find container \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": container with ID starting with 95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833695 4802 scope.go:117] "RemoveContainer" containerID="a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833923 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152"} err="failed to get container status \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": rpc error: code = NotFound desc = could not find container \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": container with ID starting with a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152 not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.833937 4802 scope.go:117] "RemoveContainer" containerID="128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.834151 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849"} err="failed to get 
container status \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": rpc error: code = NotFound desc = could not find container \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": container with ID starting with 128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849 not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.834165 4802 scope.go:117] "RemoveContainer" containerID="a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.834357 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa"} err="failed to get container status \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": rpc error: code = NotFound desc = could not find container \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": container with ID starting with a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.834376 4802 scope.go:117] "RemoveContainer" containerID="95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.834613 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb"} err="failed to get container status \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": rpc error: code = NotFound desc = could not find container \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": container with ID starting with 95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.834631 4802 scope.go:117] "RemoveContainer" 
containerID="a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.834837 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152"} err="failed to get container status \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": rpc error: code = NotFound desc = could not find container \"a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152\": container with ID starting with a5dfc693da8a8abf6f152a80f1d9382a00f1abb81d292d0bd0e1f3d7640f4152 not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.834850 4802 scope.go:117] "RemoveContainer" containerID="128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.835051 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849"} err="failed to get container status \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": rpc error: code = NotFound desc = could not find container \"128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849\": container with ID starting with 128571e1d827ee1138671a7814740ed6942959398ca8bf9cae3bfef8c9e67849 not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.835066 4802 scope.go:117] "RemoveContainer" containerID="a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.835258 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa"} err="failed to get container status \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": rpc error: code = NotFound desc = could 
not find container \"a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa\": container with ID starting with a49656ffcdcd0c48cacf36c973b541199c9c8f751e7984e0773391d86e4172fa not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.835272 4802 scope.go:117] "RemoveContainer" containerID="95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.835469 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb"} err="failed to get container status \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": rpc error: code = NotFound desc = could not find container \"95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb\": container with ID starting with 95d755e17f386e89e9a7d39ab648d7f620590c61b1fad2b50824e07532494adb not found: ID does not exist" Oct 04 05:07:55 crc kubenswrapper[4802]: I1004 05:07:55.835492 4802 scope.go:117] "RemoveContainer" containerID="be240c6f7c9da0768b330ef7604de12df37604afd0ee9a212f9d7f4a15105260" Oct 04 05:07:56 crc kubenswrapper[4802]: I1004 05:07:56.122961 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dbb8-account-create-s4pzt"] Oct 04 05:07:56 crc kubenswrapper[4802]: W1004 05:07:56.132773 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d4707af_757c_4df5_935f_8a87a4fcde55.slice/crio-6ba2c4fd709b68410f3f9271ad7a5538dfaf1b9ffefb3d18c2e5e97bcae40500 WatchSource:0}: Error finding container 6ba2c4fd709b68410f3f9271ad7a5538dfaf1b9ffefb3d18c2e5e97bcae40500: Status 404 returned error can't find the container with id 6ba2c4fd709b68410f3f9271ad7a5538dfaf1b9ffefb3d18c2e5e97bcae40500 Oct 04 05:07:56 crc kubenswrapper[4802]: I1004 05:07:56.140419 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Oct 04 05:07:56 crc kubenswrapper[4802]: I1004 05:07:56.378686 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"5b26301f92c6ff409155d12712a68269dd9751a178e6afc83d2a6f8069fd1f8e"} Oct 04 05:07:56 crc kubenswrapper[4802]: I1004 05:07:56.381397 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerStarted","Data":"d86201094469be48cbd53bb5b74ae7542055bfcb9dae7f29a1a2d9505c985246"} Oct 04 05:07:56 crc kubenswrapper[4802]: I1004 05:07:56.383269 4802 generic.go:334] "Generic (PLEG): container finished" podID="9d4707af-757c-4df5-935f-8a87a4fcde55" containerID="64980ba67382b5c542fc102c6e9f4238a810dc6fc41531cb4cfdf5f6c49d21d8" exitCode=0 Oct 04 05:07:56 crc kubenswrapper[4802]: I1004 05:07:56.383348 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dbb8-account-create-s4pzt" event={"ID":"9d4707af-757c-4df5-935f-8a87a4fcde55","Type":"ContainerDied","Data":"64980ba67382b5c542fc102c6e9f4238a810dc6fc41531cb4cfdf5f6c49d21d8"} Oct 04 05:07:56 crc kubenswrapper[4802]: I1004 05:07:56.383373 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dbb8-account-create-s4pzt" event={"ID":"9d4707af-757c-4df5-935f-8a87a4fcde55","Type":"ContainerStarted","Data":"6ba2c4fd709b68410f3f9271ad7a5538dfaf1b9ffefb3d18c2e5e97bcae40500"} Oct 04 05:07:56 crc kubenswrapper[4802]: E1004 05:07:56.386862 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" podUID="d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" Oct 04 
05:07:57 crc kubenswrapper[4802]: I1004 05:07:57.410275 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerStarted","Data":"7affb10efaaf0d7914bc2c77725f4612b7f746421b5d45971679a6accd2673a8"} Oct 04 05:07:57 crc kubenswrapper[4802]: I1004 05:07:57.749806 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dbb8-account-create-s4pzt" Oct 04 05:07:57 crc kubenswrapper[4802]: I1004 05:07:57.822893 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5rsp\" (UniqueName: \"kubernetes.io/projected/9d4707af-757c-4df5-935f-8a87a4fcde55-kube-api-access-v5rsp\") pod \"9d4707af-757c-4df5-935f-8a87a4fcde55\" (UID: \"9d4707af-757c-4df5-935f-8a87a4fcde55\") " Oct 04 05:07:57 crc kubenswrapper[4802]: I1004 05:07:57.828581 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4707af-757c-4df5-935f-8a87a4fcde55-kube-api-access-v5rsp" (OuterVolumeSpecName: "kube-api-access-v5rsp") pod "9d4707af-757c-4df5-935f-8a87a4fcde55" (UID: "9d4707af-757c-4df5-935f-8a87a4fcde55"). InnerVolumeSpecName "kube-api-access-v5rsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:07:57 crc kubenswrapper[4802]: I1004 05:07:57.925308 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5rsp\" (UniqueName: \"kubernetes.io/projected/9d4707af-757c-4df5-935f-8a87a4fcde55-kube-api-access-v5rsp\") on node \"crc\" DevicePath \"\"" Oct 04 05:07:58 crc kubenswrapper[4802]: I1004 05:07:58.422204 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerStarted","Data":"7b5d4e007112bd805ae647fc775ac4fddf5e7f6f2f27c9549aca05deee9eb7ba"} Oct 04 05:07:58 crc kubenswrapper[4802]: I1004 05:07:58.424470 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dbb8-account-create-s4pzt" event={"ID":"9d4707af-757c-4df5-935f-8a87a4fcde55","Type":"ContainerDied","Data":"6ba2c4fd709b68410f3f9271ad7a5538dfaf1b9ffefb3d18c2e5e97bcae40500"} Oct 04 05:07:58 crc kubenswrapper[4802]: I1004 05:07:58.424507 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ba2c4fd709b68410f3f9271ad7a5538dfaf1b9ffefb3d18c2e5e97bcae40500" Oct 04 05:07:58 crc kubenswrapper[4802]: I1004 05:07:58.424572 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dbb8-account-create-s4pzt" Oct 04 05:07:59 crc kubenswrapper[4802]: I1004 05:07:59.436483 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerStarted","Data":"4b7267f2bca8b195fffec5526331bbff2cdcf925725567edabe146ad845b1efb"} Oct 04 05:08:00 crc kubenswrapper[4802]: I1004 05:08:00.449811 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerStarted","Data":"7e25bdd9e2f24bb9e30e93c8204a5f238f20db977eb7293607919397a99d089b"} Oct 04 05:08:00 crc kubenswrapper[4802]: I1004 05:08:00.450393 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:08:00 crc kubenswrapper[4802]: I1004 05:08:00.450277 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="proxy-httpd" containerID="cri-o://7e25bdd9e2f24bb9e30e93c8204a5f238f20db977eb7293607919397a99d089b" gracePeriod=30 Oct 04 05:08:00 crc kubenswrapper[4802]: I1004 05:08:00.449984 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="ceilometer-central-agent" containerID="cri-o://7affb10efaaf0d7914bc2c77725f4612b7f746421b5d45971679a6accd2673a8" gracePeriod=30 Oct 04 05:08:00 crc kubenswrapper[4802]: I1004 05:08:00.450290 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="sg-core" containerID="cri-o://4b7267f2bca8b195fffec5526331bbff2cdcf925725567edabe146ad845b1efb" gracePeriod=30 Oct 04 05:08:00 crc kubenswrapper[4802]: I1004 05:08:00.450301 4802 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="ceilometer-notification-agent" containerID="cri-o://7b5d4e007112bd805ae647fc775ac4fddf5e7f6f2f27c9549aca05deee9eb7ba" gracePeriod=30 Oct 04 05:08:00 crc kubenswrapper[4802]: I1004 05:08:00.475567 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=13.178870208 podStartE2EDuration="16.475547046s" podCreationTimestamp="2025-10-04 05:07:44 +0000 UTC" firstStartedPulling="2025-10-04 05:07:56.152010717 +0000 UTC m=+1318.560011342" lastFinishedPulling="2025-10-04 05:07:59.448687555 +0000 UTC m=+1321.856688180" observedRunningTime="2025-10-04 05:08:00.471542792 +0000 UTC m=+1322.879543437" watchObservedRunningTime="2025-10-04 05:08:00.475547046 +0000 UTC m=+1322.883547671" Oct 04 05:08:01 crc kubenswrapper[4802]: I1004 05:08:01.464898 4802 generic.go:334] "Generic (PLEG): container finished" podID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerID="7e25bdd9e2f24bb9e30e93c8204a5f238f20db977eb7293607919397a99d089b" exitCode=0 Oct 04 05:08:01 crc kubenswrapper[4802]: I1004 05:08:01.465359 4802 generic.go:334] "Generic (PLEG): container finished" podID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerID="4b7267f2bca8b195fffec5526331bbff2cdcf925725567edabe146ad845b1efb" exitCode=2 Oct 04 05:08:01 crc kubenswrapper[4802]: I1004 05:08:01.465370 4802 generic.go:334] "Generic (PLEG): container finished" podID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerID="7b5d4e007112bd805ae647fc775ac4fddf5e7f6f2f27c9549aca05deee9eb7ba" exitCode=0 Oct 04 05:08:01 crc kubenswrapper[4802]: I1004 05:08:01.464990 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerDied","Data":"7e25bdd9e2f24bb9e30e93c8204a5f238f20db977eb7293607919397a99d089b"} Oct 04 05:08:01 crc kubenswrapper[4802]: I1004 05:08:01.465425 4802 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerDied","Data":"4b7267f2bca8b195fffec5526331bbff2cdcf925725567edabe146ad845b1efb"} Oct 04 05:08:01 crc kubenswrapper[4802]: I1004 05:08:01.465443 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerDied","Data":"7b5d4e007112bd805ae647fc775ac4fddf5e7f6f2f27c9549aca05deee9eb7ba"} Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.486831 4802 generic.go:334] "Generic (PLEG): container finished" podID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerID="7affb10efaaf0d7914bc2c77725f4612b7f746421b5d45971679a6accd2673a8" exitCode=0 Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.486904 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerDied","Data":"7affb10efaaf0d7914bc2c77725f4612b7f746421b5d45971679a6accd2673a8"} Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.603823 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.647194 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-log-httpd\") pod \"1e4bcede-83d2-41fc-b88d-ac0efecde184\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.647246 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-run-httpd\") pod \"1e4bcede-83d2-41fc-b88d-ac0efecde184\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.647394 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-combined-ca-bundle\") pod \"1e4bcede-83d2-41fc-b88d-ac0efecde184\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.647474 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-scripts\") pod \"1e4bcede-83d2-41fc-b88d-ac0efecde184\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.647517 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-config-data\") pod \"1e4bcede-83d2-41fc-b88d-ac0efecde184\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.647620 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwzfc\" (UniqueName: 
\"kubernetes.io/projected/1e4bcede-83d2-41fc-b88d-ac0efecde184-kube-api-access-jwzfc\") pod \"1e4bcede-83d2-41fc-b88d-ac0efecde184\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.647771 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-sg-core-conf-yaml\") pod \"1e4bcede-83d2-41fc-b88d-ac0efecde184\" (UID: \"1e4bcede-83d2-41fc-b88d-ac0efecde184\") " Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.647904 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1e4bcede-83d2-41fc-b88d-ac0efecde184" (UID: "1e4bcede-83d2-41fc-b88d-ac0efecde184"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.648534 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.654091 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1e4bcede-83d2-41fc-b88d-ac0efecde184" (UID: "1e4bcede-83d2-41fc-b88d-ac0efecde184"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.654919 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-scripts" (OuterVolumeSpecName: "scripts") pod "1e4bcede-83d2-41fc-b88d-ac0efecde184" (UID: "1e4bcede-83d2-41fc-b88d-ac0efecde184"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.659928 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4bcede-83d2-41fc-b88d-ac0efecde184-kube-api-access-jwzfc" (OuterVolumeSpecName: "kube-api-access-jwzfc") pod "1e4bcede-83d2-41fc-b88d-ac0efecde184" (UID: "1e4bcede-83d2-41fc-b88d-ac0efecde184"). InnerVolumeSpecName "kube-api-access-jwzfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.680520 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1e4bcede-83d2-41fc-b88d-ac0efecde184" (UID: "1e4bcede-83d2-41fc-b88d-ac0efecde184"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.732067 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e4bcede-83d2-41fc-b88d-ac0efecde184" (UID: "1e4bcede-83d2-41fc-b88d-ac0efecde184"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.746107 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-config-data" (OuterVolumeSpecName: "config-data") pod "1e4bcede-83d2-41fc-b88d-ac0efecde184" (UID: "1e4bcede-83d2-41fc-b88d-ac0efecde184"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.750191 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e4bcede-83d2-41fc-b88d-ac0efecde184-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.750227 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.750239 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.750248 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.750258 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwzfc\" (UniqueName: \"kubernetes.io/projected/1e4bcede-83d2-41fc-b88d-ac0efecde184-kube-api-access-jwzfc\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:03 crc kubenswrapper[4802]: I1004 05:08:03.750270 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e4bcede-83d2-41fc-b88d-ac0efecde184-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.499633 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e4bcede-83d2-41fc-b88d-ac0efecde184","Type":"ContainerDied","Data":"d86201094469be48cbd53bb5b74ae7542055bfcb9dae7f29a1a2d9505c985246"} Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 
05:08:04.499721 4802 scope.go:117] "RemoveContainer" containerID="7e25bdd9e2f24bb9e30e93c8204a5f238f20db977eb7293607919397a99d089b" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.499767 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.526067 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.531678 4802 scope.go:117] "RemoveContainer" containerID="4b7267f2bca8b195fffec5526331bbff2cdcf925725567edabe146ad845b1efb" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.539433 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.561874 4802 scope.go:117] "RemoveContainer" containerID="7b5d4e007112bd805ae647fc775ac4fddf5e7f6f2f27c9549aca05deee9eb7ba" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.585959 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:08:04 crc kubenswrapper[4802]: E1004 05:08:04.591308 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4707af-757c-4df5-935f-8a87a4fcde55" containerName="mariadb-account-create" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.591397 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4707af-757c-4df5-935f-8a87a4fcde55" containerName="mariadb-account-create" Oct 04 05:08:04 crc kubenswrapper[4802]: E1004 05:08:04.591678 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="ceilometer-central-agent" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.591728 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="ceilometer-central-agent" Oct 04 05:08:04 crc kubenswrapper[4802]: E1004 05:08:04.591803 4802 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="sg-core" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.592745 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="sg-core" Oct 04 05:08:04 crc kubenswrapper[4802]: E1004 05:08:04.592786 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="ceilometer-notification-agent" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.592794 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="ceilometer-notification-agent" Oct 04 05:08:04 crc kubenswrapper[4802]: E1004 05:08:04.592809 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="proxy-httpd" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.593192 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="proxy-httpd" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.593974 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="ceilometer-notification-agent" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.594036 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="sg-core" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.594070 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="ceilometer-central-agent" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.594094 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" containerName="proxy-httpd" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 
05:08:04.594136 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4707af-757c-4df5-935f-8a87a4fcde55" containerName="mariadb-account-create" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.596744 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.600173 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.600688 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.600895 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.605891 4802 scope.go:117] "RemoveContainer" containerID="7affb10efaaf0d7914bc2c77725f4612b7f746421b5d45971679a6accd2673a8" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.672278 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-config-data\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.672336 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-scripts\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.672355 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-run-httpd\") pod 
\"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.672394 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.672423 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg8wd\" (UniqueName: \"kubernetes.io/projected/e1ba85ff-3646-4a98-880d-460eaf109ead-kube-api-access-tg8wd\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.672446 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-log-httpd\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.672487 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.774209 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-config-data\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc 
kubenswrapper[4802]: I1004 05:08:04.774258 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-scripts\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.774281 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-run-httpd\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.774323 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.774355 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg8wd\" (UniqueName: \"kubernetes.io/projected/e1ba85ff-3646-4a98-880d-460eaf109ead-kube-api-access-tg8wd\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.774385 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-log-httpd\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.774437 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.775874 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-run-httpd\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.776760 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-log-httpd\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.780261 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-scripts\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.780772 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-config-data\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.781250 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.789928 4802 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.792933 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg8wd\" (UniqueName: \"kubernetes.io/projected/e1ba85ff-3646-4a98-880d-460eaf109ead-kube-api-access-tg8wd\") pod \"ceilometer-0\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") " pod="openstack/ceilometer-0" Oct 04 05:08:04 crc kubenswrapper[4802]: I1004 05:08:04.922705 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:08:05 crc kubenswrapper[4802]: I1004 05:08:05.362603 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:08:05 crc kubenswrapper[4802]: I1004 05:08:05.508396 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerStarted","Data":"679d1ff892c1b162c2a4e50620ed2a689da871e46b25bc2f8203c9dd108b4c8f"} Oct 04 05:08:06 crc kubenswrapper[4802]: I1004 05:08:06.369991 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e4bcede-83d2-41fc-b88d-ac0efecde184" path="/var/lib/kubelet/pods/1e4bcede-83d2-41fc-b88d-ac0efecde184/volumes" Oct 04 05:08:07 crc kubenswrapper[4802]: I1004 05:08:07.528073 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerStarted","Data":"7e09ee2202022c21290cc13a04dcffa4595e532706a3c62b894422b1371e86b3"} Oct 04 05:08:08 crc kubenswrapper[4802]: I1004 05:08:08.538028 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerStarted","Data":"e22fcee277359931e1645fa4058832549fd547f6ff5bee1e5f7e741b63fa4a1c"} Oct 04 05:08:08 crc kubenswrapper[4802]: I1004 05:08:08.538429 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerStarted","Data":"dccace3a4b152212113da5966dc46ec37b68bd19f26feb336c82f799d0dd81b9"} Oct 04 05:08:10 crc kubenswrapper[4802]: I1004 05:08:10.560110 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerStarted","Data":"5bbdd640ea90cbb210219c9a2bfc6481471cc1714ee56f7bdcacd9640b29ee72"} Oct 04 05:08:10 crc kubenswrapper[4802]: I1004 05:08:10.560858 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:08:10 crc kubenswrapper[4802]: I1004 05:08:10.562837 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" event={"ID":"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db","Type":"ContainerStarted","Data":"bcb4898118c3a26ed76cfd65622fa646515cf666f351c96d8e12e5848c0aa9bb"} Oct 04 05:08:10 crc kubenswrapper[4802]: I1004 05:08:10.590857 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.371953489 podStartE2EDuration="6.590834543s" podCreationTimestamp="2025-10-04 05:08:04 +0000 UTC" firstStartedPulling="2025-10-04 05:08:05.368431229 +0000 UTC m=+1327.776431844" lastFinishedPulling="2025-10-04 05:08:09.587312273 +0000 UTC m=+1331.995312898" observedRunningTime="2025-10-04 05:08:10.580694836 +0000 UTC m=+1332.988695481" watchObservedRunningTime="2025-10-04 05:08:10.590834543 +0000 UTC m=+1332.998835168" Oct 04 05:08:10 crc kubenswrapper[4802]: I1004 05:08:10.605379 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-conductor-db-sync-nxhb5" podStartSLOduration=2.48363801 podStartE2EDuration="31.605361765s" podCreationTimestamp="2025-10-04 05:07:39 +0000 UTC" firstStartedPulling="2025-10-04 05:07:40.787605788 +0000 UTC m=+1303.195606413" lastFinishedPulling="2025-10-04 05:08:09.909329543 +0000 UTC m=+1332.317330168" observedRunningTime="2025-10-04 05:08:10.59671179 +0000 UTC m=+1333.004712415" watchObservedRunningTime="2025-10-04 05:08:10.605361765 +0000 UTC m=+1333.013362390" Oct 04 05:08:34 crc kubenswrapper[4802]: I1004 05:08:34.926960 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 04 05:08:35 crc kubenswrapper[4802]: I1004 05:08:35.756015 4802 generic.go:334] "Generic (PLEG): container finished" podID="d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" containerID="bcb4898118c3a26ed76cfd65622fa646515cf666f351c96d8e12e5848c0aa9bb" exitCode=0 Oct 04 05:08:35 crc kubenswrapper[4802]: I1004 05:08:35.756043 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" event={"ID":"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db","Type":"ContainerDied","Data":"bcb4898118c3a26ed76cfd65622fa646515cf666f351c96d8e12e5848c0aa9bb"} Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.146922 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.276887 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-config-data\") pod \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.277190 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-scripts\") pod \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.277313 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hblk\" (UniqueName: \"kubernetes.io/projected/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-kube-api-access-6hblk\") pod \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.277501 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-combined-ca-bundle\") pod \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\" (UID: \"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db\") " Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.284810 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-scripts" (OuterVolumeSpecName: "scripts") pod "d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" (UID: "d698cfff-f3cf-46c2-9ff2-f1ac8262d5db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.284869 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-kube-api-access-6hblk" (OuterVolumeSpecName: "kube-api-access-6hblk") pod "d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" (UID: "d698cfff-f3cf-46c2-9ff2-f1ac8262d5db"). InnerVolumeSpecName "kube-api-access-6hblk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.301846 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" (UID: "d698cfff-f3cf-46c2-9ff2-f1ac8262d5db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.309534 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-config-data" (OuterVolumeSpecName: "config-data") pod "d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" (UID: "d698cfff-f3cf-46c2-9ff2-f1ac8262d5db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.380059 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.380669 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.380696 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.380714 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hblk\" (UniqueName: \"kubernetes.io/projected/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db-kube-api-access-6hblk\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.521905 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.522140 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9" containerName="kube-state-metrics" containerID="cri-o://0049e92994981693b2d762ab30dc7eb17bb975cb34128319e0cedd0949a81d3e" gracePeriod=30 Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.773836 4802 generic.go:334] "Generic (PLEG): container finished" podID="0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9" containerID="0049e92994981693b2d762ab30dc7eb17bb975cb34128319e0cedd0949a81d3e" exitCode=2 Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.773899 4802 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9","Type":"ContainerDied","Data":"0049e92994981693b2d762ab30dc7eb17bb975cb34128319e0cedd0949a81d3e"} Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.775050 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" event={"ID":"d698cfff-f3cf-46c2-9ff2-f1ac8262d5db","Type":"ContainerDied","Data":"295c254daf6782420a47e3abb725eaa2032ec697c12101ecef3d41aeb8165d4a"} Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.775085 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="295c254daf6782420a47e3abb725eaa2032ec697c12101ecef3d41aeb8165d4a" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.775140 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nxhb5" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.879915 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 04 05:08:37 crc kubenswrapper[4802]: E1004 05:08:37.880404 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" containerName="nova-cell0-conductor-db-sync" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.880426 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" containerName="nova-cell0-conductor-db-sync" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.880684 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" containerName="nova-cell0-conductor-db-sync" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.881264 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.883761 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mrprz" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.884192 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.912595 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.966407 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.993984 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqtdl\" (UniqueName: \"kubernetes.io/projected/0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9-kube-api-access-cqtdl\") pod \"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9\" (UID: \"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9\") " Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.994367 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c6a991-4c66-48fc-b4a4-8da5d55279c7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.994587 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbpj8\" (UniqueName: \"kubernetes.io/projected/76c6a991-4c66-48fc-b4a4-8da5d55279c7-kube-api-access-nbpj8\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0" Oct 04 05:08:37 crc kubenswrapper[4802]: I1004 05:08:37.994676 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c6a991-4c66-48fc-b4a4-8da5d55279c7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.000017 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9-kube-api-access-cqtdl" (OuterVolumeSpecName: "kube-api-access-cqtdl") pod "0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9" (UID: "0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9"). InnerVolumeSpecName "kube-api-access-cqtdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.095995 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbpj8\" (UniqueName: \"kubernetes.io/projected/76c6a991-4c66-48fc-b4a4-8da5d55279c7-kube-api-access-nbpj8\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.096037 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c6a991-4c66-48fc-b4a4-8da5d55279c7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.096076 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c6a991-4c66-48fc-b4a4-8da5d55279c7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.096131 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqtdl\" (UniqueName: \"kubernetes.io/projected/0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9-kube-api-access-cqtdl\") on node \"crc\" DevicePath \"\""
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.099978 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c6a991-4c66-48fc-b4a4-8da5d55279c7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.100006 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c6a991-4c66-48fc-b4a4-8da5d55279c7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.115151 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbpj8\" (UniqueName: \"kubernetes.io/projected/76c6a991-4c66-48fc-b4a4-8da5d55279c7-kube-api-access-nbpj8\") pod \"nova-cell0-conductor-0\" (UID: \"76c6a991-4c66-48fc-b4a4-8da5d55279c7\") " pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.215401 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.587276 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.587603 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="ceilometer-central-agent" containerID="cri-o://7e09ee2202022c21290cc13a04dcffa4595e532706a3c62b894422b1371e86b3" gracePeriod=30
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.587737 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="proxy-httpd" containerID="cri-o://5bbdd640ea90cbb210219c9a2bfc6481471cc1714ee56f7bdcacd9640b29ee72" gracePeriod=30
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.587777 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="sg-core" containerID="cri-o://e22fcee277359931e1645fa4058832549fd547f6ff5bee1e5f7e741b63fa4a1c" gracePeriod=30
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.587812 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="ceilometer-notification-agent" containerID="cri-o://dccace3a4b152212113da5966dc46ec37b68bd19f26feb336c82f799d0dd81b9" gracePeriod=30
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.650009 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.786350 4802 generic.go:334] "Generic (PLEG): container finished" podID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerID="e22fcee277359931e1645fa4058832549fd547f6ff5bee1e5f7e741b63fa4a1c" exitCode=2
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.786421 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerDied","Data":"e22fcee277359931e1645fa4058832549fd547f6ff5bee1e5f7e741b63fa4a1c"}
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.787984 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76c6a991-4c66-48fc-b4a4-8da5d55279c7","Type":"ContainerStarted","Data":"34071b7b2ab0f2aeec14aad5cc5bf9995946ca0c021f377e08662773e3b42803"}
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.790136 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9","Type":"ContainerDied","Data":"7d42a0ea910387e1149a715c11b55b3266347765d38d77995029bd052a364f02"}
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.790207 4802 scope.go:117] "RemoveContainer" containerID="0049e92994981693b2d762ab30dc7eb17bb975cb34128319e0cedd0949a81d3e"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.790318 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.822249 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.832155 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.839673 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 04 05:08:38 crc kubenswrapper[4802]: E1004 05:08:38.840010 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9" containerName="kube-state-metrics"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.840026 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9" containerName="kube-state-metrics"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.840209 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9" containerName="kube-state-metrics"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.840734 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.842397 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.843351 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.850103 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.918661 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tjnf\" (UniqueName: \"kubernetes.io/projected/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-api-access-7tjnf\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.918717 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.919218 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:38 crc kubenswrapper[4802]: I1004 05:08:38.919313 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.021285 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.021858 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.022220 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.022280 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tjnf\" (UniqueName: \"kubernetes.io/projected/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-api-access-7tjnf\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.027945 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.028055 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.028689 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92739001-1a5b-465c-a6f7-728d00aeadfd-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.041466 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tjnf\" (UniqueName: \"kubernetes.io/projected/92739001-1a5b-465c-a6f7-728d00aeadfd-kube-api-access-7tjnf\") pod \"kube-state-metrics-0\" (UID: \"92739001-1a5b-465c-a6f7-728d00aeadfd\") " pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.175461 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.625307 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 04 05:08:39 crc kubenswrapper[4802]: W1004 05:08:39.639654 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92739001_1a5b_465c_a6f7_728d00aeadfd.slice/crio-f42fbc78343a0adbb4e8b97fd38bb3782049519dff357814a5a30a410f447a1a WatchSource:0}: Error finding container f42fbc78343a0adbb4e8b97fd38bb3782049519dff357814a5a30a410f447a1a: Status 404 returned error can't find the container with id f42fbc78343a0adbb4e8b97fd38bb3782049519dff357814a5a30a410f447a1a
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.805859 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92739001-1a5b-465c-a6f7-728d00aeadfd","Type":"ContainerStarted","Data":"f42fbc78343a0adbb4e8b97fd38bb3782049519dff357814a5a30a410f447a1a"}
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.817149 4802 generic.go:334] "Generic (PLEG): container finished" podID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerID="5bbdd640ea90cbb210219c9a2bfc6481471cc1714ee56f7bdcacd9640b29ee72" exitCode=0
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.817183 4802 generic.go:334] "Generic (PLEG): container finished" podID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerID="dccace3a4b152212113da5966dc46ec37b68bd19f26feb336c82f799d0dd81b9" exitCode=0
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.817193 4802 generic.go:334] "Generic (PLEG): container finished" podID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerID="7e09ee2202022c21290cc13a04dcffa4595e532706a3c62b894422b1371e86b3" exitCode=0
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.817238 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerDied","Data":"5bbdd640ea90cbb210219c9a2bfc6481471cc1714ee56f7bdcacd9640b29ee72"}
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.817297 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerDied","Data":"dccace3a4b152212113da5966dc46ec37b68bd19f26feb336c82f799d0dd81b9"}
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.817313 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerDied","Data":"7e09ee2202022c21290cc13a04dcffa4595e532706a3c62b894422b1371e86b3"}
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.820728 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76c6a991-4c66-48fc-b4a4-8da5d55279c7","Type":"ContainerStarted","Data":"88a52a5688b55bd0709a2cdd14308dee63ec62608a1ae63e7d08e1eb3f4df33e"}
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.820882 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Oct 04 05:08:39 crc kubenswrapper[4802]: I1004 05:08:39.842363 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.842277143 podStartE2EDuration="2.842277143s" podCreationTimestamp="2025-10-04 05:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:08:39.83404861 +0000 UTC m=+1362.242049235" watchObservedRunningTime="2025-10-04 05:08:39.842277143 +0000 UTC m=+1362.250277768"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.175830 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.239248 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-run-httpd\") pod \"e1ba85ff-3646-4a98-880d-460eaf109ead\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") "
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.239305 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-log-httpd\") pod \"e1ba85ff-3646-4a98-880d-460eaf109ead\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") "
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.239354 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-sg-core-conf-yaml\") pod \"e1ba85ff-3646-4a98-880d-460eaf109ead\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") "
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.239446 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-combined-ca-bundle\") pod \"e1ba85ff-3646-4a98-880d-460eaf109ead\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") "
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.239506 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-scripts\") pod \"e1ba85ff-3646-4a98-880d-460eaf109ead\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") "
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.239613 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e1ba85ff-3646-4a98-880d-460eaf109ead" (UID: "e1ba85ff-3646-4a98-880d-460eaf109ead"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.239631 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg8wd\" (UniqueName: \"kubernetes.io/projected/e1ba85ff-3646-4a98-880d-460eaf109ead-kube-api-access-tg8wd\") pod \"e1ba85ff-3646-4a98-880d-460eaf109ead\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") "
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.239691 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-config-data\") pod \"e1ba85ff-3646-4a98-880d-460eaf109ead\" (UID: \"e1ba85ff-3646-4a98-880d-460eaf109ead\") "
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.240174 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.242094 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e1ba85ff-3646-4a98-880d-460eaf109ead" (UID: "e1ba85ff-3646-4a98-880d-460eaf109ead"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.245187 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-scripts" (OuterVolumeSpecName: "scripts") pod "e1ba85ff-3646-4a98-880d-460eaf109ead" (UID: "e1ba85ff-3646-4a98-880d-460eaf109ead"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.255534 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ba85ff-3646-4a98-880d-460eaf109ead-kube-api-access-tg8wd" (OuterVolumeSpecName: "kube-api-access-tg8wd") pod "e1ba85ff-3646-4a98-880d-460eaf109ead" (UID: "e1ba85ff-3646-4a98-880d-460eaf109ead"). InnerVolumeSpecName "kube-api-access-tg8wd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.269297 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e1ba85ff-3646-4a98-880d-460eaf109ead" (UID: "e1ba85ff-3646-4a98-880d-460eaf109ead"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.312078 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1ba85ff-3646-4a98-880d-460eaf109ead" (UID: "e1ba85ff-3646-4a98-880d-460eaf109ead"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.342652 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg8wd\" (UniqueName: \"kubernetes.io/projected/e1ba85ff-3646-4a98-880d-460eaf109ead-kube-api-access-tg8wd\") on node \"crc\" DevicePath \"\""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.342692 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1ba85ff-3646-4a98-880d-460eaf109ead-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.342705 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.342716 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.342734 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-scripts\") on node \"crc\" DevicePath \"\""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.367588 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-config-data" (OuterVolumeSpecName: "config-data") pod "e1ba85ff-3646-4a98-880d-460eaf109ead" (UID: "e1ba85ff-3646-4a98-880d-460eaf109ead"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.374054 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9" path="/var/lib/kubelet/pods/0ab1386a-37b3-4fd5-a3c6-56fd5434b7b9/volumes"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.444324 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ba85ff-3646-4a98-880d-460eaf109ead-config-data\") on node \"crc\" DevicePath \"\""
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.832557 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92739001-1a5b-465c-a6f7-728d00aeadfd","Type":"ContainerStarted","Data":"3f62c5b950c778f7c523b96abf71de6f34ead2fa688f929b6b242547e2476ce7"}
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.833077 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.837660 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1ba85ff-3646-4a98-880d-460eaf109ead","Type":"ContainerDied","Data":"679d1ff892c1b162c2a4e50620ed2a689da871e46b25bc2f8203c9dd108b4c8f"}
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.837716 4802 scope.go:117] "RemoveContainer" containerID="5bbdd640ea90cbb210219c9a2bfc6481471cc1714ee56f7bdcacd9640b29ee72"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.837760 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.849916 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.101653968 podStartE2EDuration="2.849898455s" podCreationTimestamp="2025-10-04 05:08:38 +0000 UTC" firstStartedPulling="2025-10-04 05:08:39.642256048 +0000 UTC m=+1362.050256663" lastFinishedPulling="2025-10-04 05:08:40.390500525 +0000 UTC m=+1362.798501150" observedRunningTime="2025-10-04 05:08:40.848049082 +0000 UTC m=+1363.256049707" watchObservedRunningTime="2025-10-04 05:08:40.849898455 +0000 UTC m=+1363.257899100"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.869780 4802 scope.go:117] "RemoveContainer" containerID="e22fcee277359931e1645fa4058832549fd547f6ff5bee1e5f7e741b63fa4a1c"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.876334 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.891119 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.892440 4802 scope.go:117] "RemoveContainer" containerID="dccace3a4b152212113da5966dc46ec37b68bd19f26feb336c82f799d0dd81b9"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.900304 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:08:40 crc kubenswrapper[4802]: E1004 05:08:40.900760 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="ceilometer-central-agent"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.900784 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="ceilometer-central-agent"
Oct 04 05:08:40 crc kubenswrapper[4802]: E1004 05:08:40.900822 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="proxy-httpd"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.900832 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="proxy-httpd"
Oct 04 05:08:40 crc kubenswrapper[4802]: E1004 05:08:40.900849 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="sg-core"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.900860 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="sg-core"
Oct 04 05:08:40 crc kubenswrapper[4802]: E1004 05:08:40.900881 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="ceilometer-notification-agent"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.900889 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="ceilometer-notification-agent"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.901080 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="proxy-httpd"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.901132 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="ceilometer-notification-agent"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.901146 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="ceilometer-central-agent"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.901172 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" containerName="sg-core"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.905678 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.909962 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.911986 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.912883 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.912884 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.933188 4802 scope.go:117] "RemoveContainer" containerID="7e09ee2202022c21290cc13a04dcffa4595e532706a3c62b894422b1371e86b3"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.951887 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-run-httpd\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.952013 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.952078 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm5f9\" (UniqueName: \"kubernetes.io/projected/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-kube-api-access-vm5f9\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.952141 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-log-httpd\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.952206 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-config-data\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.952261 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-scripts\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.952302 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:40 crc kubenswrapper[4802]: I1004 05:08:40.952319 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.053710 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.053767 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.053842 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-run-httpd\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.053900 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.053941 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm5f9\" (UniqueName: \"kubernetes.io/projected/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-kube-api-access-vm5f9\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.053972 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-log-httpd\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.054003 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-config-data\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.054054 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-scripts\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.054410 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-run-httpd\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.054953 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-log-httpd\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.059129 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-scripts\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.059352 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.059410 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-config-data\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.060093 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.066934 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.071263 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5f9\" (UniqueName: \"kubernetes.io/projected/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-kube-api-access-vm5f9\") pod \"ceilometer-0\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " pod="openstack/ceilometer-0"
Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.240066 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.714078 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:08:41 crc kubenswrapper[4802]: W1004 05:08:41.728244 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f74aeeb_8b92_4e1f_8588_a41ee53f7259.slice/crio-609e70130c656efb34a8aa109b11411d8c716ece5bc9190154677125960d8607 WatchSource:0}: Error finding container 609e70130c656efb34a8aa109b11411d8c716ece5bc9190154677125960d8607: Status 404 returned error can't find the container with id 609e70130c656efb34a8aa109b11411d8c716ece5bc9190154677125960d8607 Oct 04 05:08:41 crc kubenswrapper[4802]: I1004 05:08:41.845488 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerStarted","Data":"609e70130c656efb34a8aa109b11411d8c716ece5bc9190154677125960d8607"} Oct 04 05:08:42 crc kubenswrapper[4802]: I1004 05:08:42.369538 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ba85ff-3646-4a98-880d-460eaf109ead" path="/var/lib/kubelet/pods/e1ba85ff-3646-4a98-880d-460eaf109ead/volumes" Oct 04 05:08:42 crc kubenswrapper[4802]: I1004 05:08:42.856822 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerStarted","Data":"78685555954ce23468095d54f93a56cd08d725f086dbf2bb1d465e3bae68b0fe"} Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.246295 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.718990 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-b7dxc"] Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.734764 4802 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.743158 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.743605 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.751445 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b7dxc"] Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.820999 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-scripts\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.821128 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.821179 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66n5d\" (UniqueName: \"kubernetes.io/projected/6ac00fcb-556f-496a-85e6-50e1985c617a-kube-api-access-66n5d\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.821298 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-config-data\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.866509 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.867602 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.886994 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.895998 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.906856 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerStarted","Data":"f374e654858d2f071d180e75cbf5bcf444b0d4fda19fc0c0ce21c04253cff8e5"} Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.923167 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.923437 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: 
\"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.923570 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66n5d\" (UniqueName: \"kubernetes.io/projected/6ac00fcb-556f-496a-85e6-50e1985c617a-kube-api-access-66n5d\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.923802 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8c6m\" (UniqueName: \"kubernetes.io/projected/96666d3e-d058-40bf-95b4-fa099cd8694d-kube-api-access-z8c6m\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.923971 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.924029 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-config-data\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.924093 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-scripts\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: 
\"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.932148 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.947671 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.952076 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.954338 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-scripts\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.955745 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.964424 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-config-data\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.976171 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:43 crc kubenswrapper[4802]: I1004 05:08:43.988333 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-66n5d\" (UniqueName: \"kubernetes.io/projected/6ac00fcb-556f-496a-85e6-50e1985c617a-kube-api-access-66n5d\") pod \"nova-cell0-cell-mapping-b7dxc\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.001228 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.002705 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.008505 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.021630 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.026787 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.026839 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48596acf-5a97-46e9-a84c-5c4f8e07a998-logs\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.026879 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34ba8384-dd5a-4b58-8151-593782da615a-logs\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc 
kubenswrapper[4802]: I1004 05:08:44.026906 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-config-data\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.026930 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.027004 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.027043 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-config-data\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.027064 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8c6m\" (UniqueName: \"kubernetes.io/projected/96666d3e-d058-40bf-95b4-fa099cd8694d-kube-api-access-z8c6m\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.027087 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwblr\" (UniqueName: \"kubernetes.io/projected/48596acf-5a97-46e9-a84c-5c4f8e07a998-kube-api-access-pwblr\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.027121 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxpj\" (UniqueName: \"kubernetes.io/projected/34ba8384-dd5a-4b58-8151-593782da615a-kube-api-access-jpxpj\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.027143 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.031213 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.040818 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.070751 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8c6m\" (UniqueName: 
\"kubernetes.io/projected/96666d3e-d058-40bf-95b4-fa099cd8694d-kube-api-access-z8c6m\") pod \"nova-cell1-novncproxy-0\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.101075 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.106592 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.106671 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-8gbmv"] Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.121199 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.128249 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.128333 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-config-data\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.128382 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwblr\" (UniqueName: \"kubernetes.io/projected/48596acf-5a97-46e9-a84c-5c4f8e07a998-kube-api-access-pwblr\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 
05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.128427 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxpj\" (UniqueName: \"kubernetes.io/projected/34ba8384-dd5a-4b58-8151-593782da615a-kube-api-access-jpxpj\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.128460 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.128511 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48596acf-5a97-46e9-a84c-5c4f8e07a998-logs\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.128563 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34ba8384-dd5a-4b58-8151-593782da615a-logs\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.128601 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-config-data\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.131212 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48596acf-5a97-46e9-a84c-5c4f8e07a998-logs\") 
pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.132820 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34ba8384-dd5a-4b58-8151-593782da615a-logs\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.137327 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.139516 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-8gbmv"] Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.146740 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-config-data\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.152861 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-config-data\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.155179 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 
05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.166551 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxpj\" (UniqueName: \"kubernetes.io/projected/34ba8384-dd5a-4b58-8151-593782da615a-kube-api-access-jpxpj\") pod \"nova-metadata-0\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.167768 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwblr\" (UniqueName: \"kubernetes.io/projected/48596acf-5a97-46e9-a84c-5c4f8e07a998-kube-api-access-pwblr\") pod \"nova-api-0\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.167813 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.168949 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.170829 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.186548 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.231545 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-config\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.231635 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-dns-svc\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.231676 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.231756 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-config-data\") pod \"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.231786 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.231854 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.231870 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pzgv\" (UniqueName: 
\"kubernetes.io/projected/070ca973-8f00-47fe-9552-99d45c1c0ec0-kube-api-access-2pzgv\") pod \"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.231923 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t86hx\" (UniqueName: \"kubernetes.io/projected/6c4b637c-9981-4c46-a657-16e2d39e0b31-kube-api-access-t86hx\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.335084 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-config\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.335474 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-dns-svc\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.335505 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.335583 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-config-data\") pod 
\"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.335617 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.335719 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.335746 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pzgv\" (UniqueName: \"kubernetes.io/projected/070ca973-8f00-47fe-9552-99d45c1c0ec0-kube-api-access-2pzgv\") pod \"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.335811 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t86hx\" (UniqueName: \"kubernetes.io/projected/6c4b637c-9981-4c46-a657-16e2d39e0b31-kube-api-access-t86hx\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.336126 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-config\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " 
pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.336286 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-dns-svc\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.336758 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.337311 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.344337 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-config-data\") pod \"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.345221 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.354385 4802 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2pzgv\" (UniqueName: \"kubernetes.io/projected/070ca973-8f00-47fe-9552-99d45c1c0ec0-kube-api-access-2pzgv\") pod \"nova-scheduler-0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.358980 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t86hx\" (UniqueName: \"kubernetes.io/projected/6c4b637c-9981-4c46-a657-16e2d39e0b31-kube-api-access-t86hx\") pod \"dnsmasq-dns-566b5b7845-8gbmv\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.425343 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.463326 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.478489 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.599255 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.657787 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b7dxc"] Oct 04 05:08:44 crc kubenswrapper[4802]: W1004 05:08:44.658959 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ac00fcb_556f_496a_85e6_50e1985c617a.slice/crio-224fec3eaa1f2933c98d33536f1e576c9727bb4e06ca7162e18da5df061a9159 WatchSource:0}: Error finding container 224fec3eaa1f2933c98d33536f1e576c9727bb4e06ca7162e18da5df061a9159: Status 404 returned error can't find the container with id 224fec3eaa1f2933c98d33536f1e576c9727bb4e06ca7162e18da5df061a9159 Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.767750 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.941883 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerStarted","Data":"6c4e33375fc9af2f36499f2bac755ab3d5eedb0a12f8886190decd18014e9ce2"} Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.943833 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"96666d3e-d058-40bf-95b4-fa099cd8694d","Type":"ContainerStarted","Data":"0b99eb213220edb7610ea7298daf77dcd7881019cbef0faa1996812978b864b8"} Oct 04 05:08:44 crc kubenswrapper[4802]: I1004 05:08:44.948556 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b7dxc" event={"ID":"6ac00fcb-556f-496a-85e6-50e1985c617a","Type":"ContainerStarted","Data":"224fec3eaa1f2933c98d33536f1e576c9727bb4e06ca7162e18da5df061a9159"} Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.054066 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] 
Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.068881 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pgm8"] Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.070394 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: W1004 05:08:45.073021 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ba8384_dd5a_4b58_8151_593782da615a.slice/crio-b456aba2ce5074f3776a0649f69d98278fef197ddd3521fa532e8a0c9348b4f6 WatchSource:0}: Error finding container b456aba2ce5074f3776a0649f69d98278fef197ddd3521fa532e8a0c9348b4f6: Status 404 returned error can't find the container with id b456aba2ce5074f3776a0649f69d98278fef197ddd3521fa532e8a0c9348b4f6 Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.084261 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.084617 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.108225 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.121137 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pgm8"] Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.235784 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-8gbmv"] Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.264164 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-config-data\") pod 
\"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.264240 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-scripts\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.264273 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.264364 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d5mq\" (UniqueName: \"kubernetes.io/projected/a9896243-f600-4461-ac5c-e22070c86c51-kube-api-access-8d5mq\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.303763 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:08:45 crc kubenswrapper[4802]: W1004 05:08:45.307254 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod070ca973_8f00_47fe_9552_99d45c1c0ec0.slice/crio-a5b1a2f6f7f102effdd5079a1434d21f37e0d8013ea470f61a05a67247ce6438 WatchSource:0}: Error finding container a5b1a2f6f7f102effdd5079a1434d21f37e0d8013ea470f61a05a67247ce6438: Status 404 returned error 
can't find the container with id a5b1a2f6f7f102effdd5079a1434d21f37e0d8013ea470f61a05a67247ce6438 Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.365244 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-scripts\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.365894 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.366000 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d5mq\" (UniqueName: \"kubernetes.io/projected/a9896243-f600-4461-ac5c-e22070c86c51-kube-api-access-8d5mq\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.366063 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-config-data\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.370798 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-config-data\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: 
\"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.370936 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.372457 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-scripts\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.383358 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d5mq\" (UniqueName: \"kubernetes.io/projected/a9896243-f600-4461-ac5c-e22070c86c51-kube-api-access-8d5mq\") pod \"nova-cell1-conductor-db-sync-7pgm8\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.423539 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.894978 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pgm8"] Oct 04 05:08:45 crc kubenswrapper[4802]: W1004 05:08:45.903791 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9896243_f600_4461_ac5c_e22070c86c51.slice/crio-54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3 WatchSource:0}: Error finding container 54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3: Status 404 returned error can't find the container with id 54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3 Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.973172 4802 generic.go:334] "Generic (PLEG): container finished" podID="6c4b637c-9981-4c46-a657-16e2d39e0b31" containerID="4a1b96c08d0ed5e4c77c79cd8c61b5d0bff89d2899aa07dad7304f4579d67997" exitCode=0 Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.973495 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" event={"ID":"6c4b637c-9981-4c46-a657-16e2d39e0b31","Type":"ContainerDied","Data":"4a1b96c08d0ed5e4c77c79cd8c61b5d0bff89d2899aa07dad7304f4579d67997"} Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.973556 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" event={"ID":"6c4b637c-9981-4c46-a657-16e2d39e0b31","Type":"ContainerStarted","Data":"8e0ab1d64cb9820d9de34feb579089c5090a810159b7db745eb07413563a5237"} Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.976223 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" event={"ID":"a9896243-f600-4461-ac5c-e22070c86c51","Type":"ContainerStarted","Data":"54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3"} Oct 04 
05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.987590 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"070ca973-8f00-47fe-9552-99d45c1c0ec0","Type":"ContainerStarted","Data":"a5b1a2f6f7f102effdd5079a1434d21f37e0d8013ea470f61a05a67247ce6438"} Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.990808 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48596acf-5a97-46e9-a84c-5c4f8e07a998","Type":"ContainerStarted","Data":"f09b2619b67a70defc5049679288aadf251c6166cc05960608bac4a34d2cbe21"} Oct 04 05:08:45 crc kubenswrapper[4802]: I1004 05:08:45.994852 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b7dxc" event={"ID":"6ac00fcb-556f-496a-85e6-50e1985c617a","Type":"ContainerStarted","Data":"4b77a608b32ea901cba6731072f8ddbcf0794fae0851a787033ca9f4505b8808"} Oct 04 05:08:46 crc kubenswrapper[4802]: I1004 05:08:46.004803 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34ba8384-dd5a-4b58-8151-593782da615a","Type":"ContainerStarted","Data":"b456aba2ce5074f3776a0649f69d98278fef197ddd3521fa532e8a0c9348b4f6"} Oct 04 05:08:46 crc kubenswrapper[4802]: I1004 05:08:46.015035 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-b7dxc" podStartSLOduration=3.015016724 podStartE2EDuration="3.015016724s" podCreationTimestamp="2025-10-04 05:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:08:46.014238752 +0000 UTC m=+1368.422239377" watchObservedRunningTime="2025-10-04 05:08:46.015016724 +0000 UTC m=+1368.423017359" Oct 04 05:08:48 crc kubenswrapper[4802]: I1004 05:08:48.002711 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:08:48 crc kubenswrapper[4802]: I1004 
05:08:48.021482 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.051309 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" event={"ID":"6c4b637c-9981-4c46-a657-16e2d39e0b31","Type":"ContainerStarted","Data":"c6591bf772d2b7ddef20c7d7bf1de4e34de29238c7121d2c23bca920b7683961"} Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.051999 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.053881 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34ba8384-dd5a-4b58-8151-593782da615a","Type":"ContainerStarted","Data":"9a5850680a8eec3df42e163731d8524413a4593dee56ac9839414be74dd0c3f6"} Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.053913 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34ba8384-dd5a-4b58-8151-593782da615a","Type":"ContainerStarted","Data":"170b2b484ef067a5895071fc8a0586f43fba7bf4f9e3a93919d3eb2c3d9fac2b"} Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.054093 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34ba8384-dd5a-4b58-8151-593782da615a" containerName="nova-metadata-log" containerID="cri-o://170b2b484ef067a5895071fc8a0586f43fba7bf4f9e3a93919d3eb2c3d9fac2b" gracePeriod=30 Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.054288 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34ba8384-dd5a-4b58-8151-593782da615a" containerName="nova-metadata-metadata" containerID="cri-o://9a5850680a8eec3df42e163731d8524413a4593dee56ac9839414be74dd0c3f6" gracePeriod=30 Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.055427 4802 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" event={"ID":"a9896243-f600-4461-ac5c-e22070c86c51","Type":"ContainerStarted","Data":"42f1ad68781c477f6ed3ed9eee01618c0deb791f59c951ba4f9302347df49f38"} Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.058129 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"070ca973-8f00-47fe-9552-99d45c1c0ec0","Type":"ContainerStarted","Data":"6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25"} Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.067595 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerStarted","Data":"dc330fe3044f910c706ae3c60b332f433939af78a90be76bfead0857e1b499d6"} Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.068518 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.070426 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"96666d3e-d058-40bf-95b4-fa099cd8694d","Type":"ContainerStarted","Data":"f6d58e942f23c299050f0d5a9bbb191786883d98ed02def333191655d0c7fd46"} Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.070559 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="96666d3e-d058-40bf-95b4-fa099cd8694d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f6d58e942f23c299050f0d5a9bbb191786883d98ed02def333191655d0c7fd46" gracePeriod=30 Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.080314 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48596acf-5a97-46e9-a84c-5c4f8e07a998","Type":"ContainerStarted","Data":"66a4397db3aa3b564541c72bf6078b02fa554f07664f5d5cd2c9f149e9548cf7"} Oct 04 05:08:49 
crc kubenswrapper[4802]: I1004 05:08:49.080361 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48596acf-5a97-46e9-a84c-5c4f8e07a998","Type":"ContainerStarted","Data":"4bc32f197b4992b1b1578f1042653adfedfa919d8468fe73dcff8231ebb786be"} Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.085006 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" podStartSLOduration=5.084986087 podStartE2EDuration="5.084986087s" podCreationTimestamp="2025-10-04 05:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:08:49.074029786 +0000 UTC m=+1371.482030431" watchObservedRunningTime="2025-10-04 05:08:49.084986087 +0000 UTC m=+1371.492986712" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.100851 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.93043382 podStartE2EDuration="6.100829918s" podCreationTimestamp="2025-10-04 05:08:43 +0000 UTC" firstStartedPulling="2025-10-04 05:08:44.795323925 +0000 UTC m=+1367.203324550" lastFinishedPulling="2025-10-04 05:08:47.965720023 +0000 UTC m=+1370.373720648" observedRunningTime="2025-10-04 05:08:49.095854357 +0000 UTC m=+1371.503854992" watchObservedRunningTime="2025-10-04 05:08:49.100829918 +0000 UTC m=+1371.508830543" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.106756 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.128858 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.101656671 podStartE2EDuration="6.128839795s" podCreationTimestamp="2025-10-04 05:08:43 +0000 UTC" firstStartedPulling="2025-10-04 05:08:45.077791881 +0000 UTC 
m=+1367.485792496" lastFinishedPulling="2025-10-04 05:08:48.104974995 +0000 UTC m=+1370.512975620" observedRunningTime="2025-10-04 05:08:49.124207293 +0000 UTC m=+1371.532207918" watchObservedRunningTime="2025-10-04 05:08:49.128839795 +0000 UTC m=+1371.536840440" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.172914 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.027775688 podStartE2EDuration="9.172869518s" podCreationTimestamp="2025-10-04 05:08:40 +0000 UTC" firstStartedPulling="2025-10-04 05:08:41.731003243 +0000 UTC m=+1364.139003868" lastFinishedPulling="2025-10-04 05:08:46.876097073 +0000 UTC m=+1369.284097698" observedRunningTime="2025-10-04 05:08:49.154378762 +0000 UTC m=+1371.562379397" watchObservedRunningTime="2025-10-04 05:08:49.172869518 +0000 UTC m=+1371.580870143" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.183342 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.493987683 podStartE2EDuration="5.183317905s" podCreationTimestamp="2025-10-04 05:08:44 +0000 UTC" firstStartedPulling="2025-10-04 05:08:45.308828384 +0000 UTC m=+1367.716829009" lastFinishedPulling="2025-10-04 05:08:47.998158606 +0000 UTC m=+1370.406159231" observedRunningTime="2025-10-04 05:08:49.175936845 +0000 UTC m=+1371.583937470" watchObservedRunningTime="2025-10-04 05:08:49.183317905 +0000 UTC m=+1371.591318530" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.191078 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.209632 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" podStartSLOduration=4.209607943 podStartE2EDuration="4.209607943s" podCreationTimestamp="2025-10-04 05:08:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:08:49.199951738 +0000 UTC m=+1371.607952373" watchObservedRunningTime="2025-10-04 05:08:49.209607943 +0000 UTC m=+1371.617608588" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.428767 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.428819 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 04 05:08:49 crc kubenswrapper[4802]: I1004 05:08:49.600661 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.094894 4802 generic.go:334] "Generic (PLEG): container finished" podID="34ba8384-dd5a-4b58-8151-593782da615a" containerID="9a5850680a8eec3df42e163731d8524413a4593dee56ac9839414be74dd0c3f6" exitCode=0 Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.094928 4802 generic.go:334] "Generic (PLEG): container finished" podID="34ba8384-dd5a-4b58-8151-593782da615a" containerID="170b2b484ef067a5895071fc8a0586f43fba7bf4f9e3a93919d3eb2c3d9fac2b" exitCode=143 Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.094967 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34ba8384-dd5a-4b58-8151-593782da615a","Type":"ContainerDied","Data":"9a5850680a8eec3df42e163731d8524413a4593dee56ac9839414be74dd0c3f6"} Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.095028 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34ba8384-dd5a-4b58-8151-593782da615a","Type":"ContainerDied","Data":"170b2b484ef067a5895071fc8a0586f43fba7bf4f9e3a93919d3eb2c3d9fac2b"} Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.095057 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"34ba8384-dd5a-4b58-8151-593782da615a","Type":"ContainerDied","Data":"b456aba2ce5074f3776a0649f69d98278fef197ddd3521fa532e8a0c9348b4f6"} Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.095067 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b456aba2ce5074f3776a0649f69d98278fef197ddd3521fa532e8a0c9348b4f6" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.118776 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.140414 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.150553762 podStartE2EDuration="7.140395674s" podCreationTimestamp="2025-10-04 05:08:43 +0000 UTC" firstStartedPulling="2025-10-04 05:08:45.112354384 +0000 UTC m=+1367.520355009" lastFinishedPulling="2025-10-04 05:08:48.102196296 +0000 UTC m=+1370.510196921" observedRunningTime="2025-10-04 05:08:49.237525147 +0000 UTC m=+1371.645525792" watchObservedRunningTime="2025-10-04 05:08:50.140395674 +0000 UTC m=+1372.548396289" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.205484 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-combined-ca-bundle\") pod \"34ba8384-dd5a-4b58-8151-593782da615a\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.205565 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpxpj\" (UniqueName: \"kubernetes.io/projected/34ba8384-dd5a-4b58-8151-593782da615a-kube-api-access-jpxpj\") pod \"34ba8384-dd5a-4b58-8151-593782da615a\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.205754 4802 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-config-data\") pod \"34ba8384-dd5a-4b58-8151-593782da615a\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.205772 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34ba8384-dd5a-4b58-8151-593782da615a-logs\") pod \"34ba8384-dd5a-4b58-8151-593782da615a\" (UID: \"34ba8384-dd5a-4b58-8151-593782da615a\") " Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.208003 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ba8384-dd5a-4b58-8151-593782da615a-logs" (OuterVolumeSpecName: "logs") pod "34ba8384-dd5a-4b58-8151-593782da615a" (UID: "34ba8384-dd5a-4b58-8151-593782da615a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.213849 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ba8384-dd5a-4b58-8151-593782da615a-kube-api-access-jpxpj" (OuterVolumeSpecName: "kube-api-access-jpxpj") pod "34ba8384-dd5a-4b58-8151-593782da615a" (UID: "34ba8384-dd5a-4b58-8151-593782da615a"). InnerVolumeSpecName "kube-api-access-jpxpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.245660 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-config-data" (OuterVolumeSpecName: "config-data") pod "34ba8384-dd5a-4b58-8151-593782da615a" (UID: "34ba8384-dd5a-4b58-8151-593782da615a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.247847 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34ba8384-dd5a-4b58-8151-593782da615a" (UID: "34ba8384-dd5a-4b58-8151-593782da615a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.308052 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpxpj\" (UniqueName: \"kubernetes.io/projected/34ba8384-dd5a-4b58-8151-593782da615a-kube-api-access-jpxpj\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.308088 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.308101 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34ba8384-dd5a-4b58-8151-593782da615a-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:50 crc kubenswrapper[4802]: I1004 05:08:50.308110 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ba8384-dd5a-4b58-8151-593782da615a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.102323 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.123354 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.139700 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.153872 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:51 crc kubenswrapper[4802]: E1004 05:08:51.154373 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ba8384-dd5a-4b58-8151-593782da615a" containerName="nova-metadata-log" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.154393 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ba8384-dd5a-4b58-8151-593782da615a" containerName="nova-metadata-log" Oct 04 05:08:51 crc kubenswrapper[4802]: E1004 05:08:51.154404 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ba8384-dd5a-4b58-8151-593782da615a" containerName="nova-metadata-metadata" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.154411 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ba8384-dd5a-4b58-8151-593782da615a" containerName="nova-metadata-metadata" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.154589 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ba8384-dd5a-4b58-8151-593782da615a" containerName="nova-metadata-metadata" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.154603 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ba8384-dd5a-4b58-8151-593782da615a" containerName="nova-metadata-log" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.156522 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.158539 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.158735 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.171097 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.223028 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.223088 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdz6\" (UniqueName: \"kubernetes.io/projected/e2bdc62d-0b58-4043-9495-94a20c94bbe1-kube-api-access-drdz6\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.223243 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bdc62d-0b58-4043-9495-94a20c94bbe1-logs\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.223298 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.223318 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-config-data\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.325428 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.325497 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdz6\" (UniqueName: \"kubernetes.io/projected/e2bdc62d-0b58-4043-9495-94a20c94bbe1-kube-api-access-drdz6\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.325618 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bdc62d-0b58-4043-9495-94a20c94bbe1-logs\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.325678 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 
05:08:51.325698 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-config-data\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.326787 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bdc62d-0b58-4043-9495-94a20c94bbe1-logs\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.330225 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.330800 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-config-data\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.340041 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.346397 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdz6\" (UniqueName: \"kubernetes.io/projected/e2bdc62d-0b58-4043-9495-94a20c94bbe1-kube-api-access-drdz6\") pod 
\"nova-metadata-0\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.485589 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:51 crc kubenswrapper[4802]: W1004 05:08:51.990301 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2bdc62d_0b58_4043_9495_94a20c94bbe1.slice/crio-d1a30aeec8d02714cc55eafa322848618362cbeda0431c2751f7b4eb18bb2fcf WatchSource:0}: Error finding container d1a30aeec8d02714cc55eafa322848618362cbeda0431c2751f7b4eb18bb2fcf: Status 404 returned error can't find the container with id d1a30aeec8d02714cc55eafa322848618362cbeda0431c2751f7b4eb18bb2fcf Oct 04 05:08:51 crc kubenswrapper[4802]: I1004 05:08:51.992213 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:52 crc kubenswrapper[4802]: I1004 05:08:52.113200 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2bdc62d-0b58-4043-9495-94a20c94bbe1","Type":"ContainerStarted","Data":"d1a30aeec8d02714cc55eafa322848618362cbeda0431c2751f7b4eb18bb2fcf"} Oct 04 05:08:52 crc kubenswrapper[4802]: I1004 05:08:52.371774 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ba8384-dd5a-4b58-8151-593782da615a" path="/var/lib/kubelet/pods/34ba8384-dd5a-4b58-8151-593782da615a/volumes" Oct 04 05:08:53 crc kubenswrapper[4802]: I1004 05:08:53.123844 4802 generic.go:334] "Generic (PLEG): container finished" podID="6ac00fcb-556f-496a-85e6-50e1985c617a" containerID="4b77a608b32ea901cba6731072f8ddbcf0794fae0851a787033ca9f4505b8808" exitCode=0 Oct 04 05:08:53 crc kubenswrapper[4802]: I1004 05:08:53.123924 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b7dxc" 
event={"ID":"6ac00fcb-556f-496a-85e6-50e1985c617a","Type":"ContainerDied","Data":"4b77a608b32ea901cba6731072f8ddbcf0794fae0851a787033ca9f4505b8808"} Oct 04 05:08:53 crc kubenswrapper[4802]: I1004 05:08:53.125719 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2bdc62d-0b58-4043-9495-94a20c94bbe1","Type":"ContainerStarted","Data":"d70ee0d204b3b27bafba09d20c89fd48774a0bb5c1c4ed58aef05074f89abc34"} Oct 04 05:08:53 crc kubenswrapper[4802]: I1004 05:08:53.125764 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2bdc62d-0b58-4043-9495-94a20c94bbe1","Type":"ContainerStarted","Data":"ba0097160e876cbc26b3f0779cd13544dff06b14bcbf668b813a0d62b3866f26"} Oct 04 05:08:53 crc kubenswrapper[4802]: I1004 05:08:53.166973 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.16695269 podStartE2EDuration="2.16695269s" podCreationTimestamp="2025-10-04 05:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:08:53.157574555 +0000 UTC m=+1375.565575190" watchObservedRunningTime="2025-10-04 05:08:53.16695269 +0000 UTC m=+1375.574953315" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.466885 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.470315 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.479827 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.560463 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"] Oct 04 
05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.560721 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" podUID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" containerName="dnsmasq-dns" containerID="cri-o://956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29" gracePeriod=10 Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.563284 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.589241 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66n5d\" (UniqueName: \"kubernetes.io/projected/6ac00fcb-556f-496a-85e6-50e1985c617a-kube-api-access-66n5d\") pod \"6ac00fcb-556f-496a-85e6-50e1985c617a\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.589309 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-combined-ca-bundle\") pod \"6ac00fcb-556f-496a-85e6-50e1985c617a\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.589525 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-config-data\") pod \"6ac00fcb-556f-496a-85e6-50e1985c617a\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.589710 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-scripts\") pod \"6ac00fcb-556f-496a-85e6-50e1985c617a\" (UID: \"6ac00fcb-556f-496a-85e6-50e1985c617a\") " Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 
05:08:54.599293 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac00fcb-556f-496a-85e6-50e1985c617a-kube-api-access-66n5d" (OuterVolumeSpecName: "kube-api-access-66n5d") pod "6ac00fcb-556f-496a-85e6-50e1985c617a" (UID: "6ac00fcb-556f-496a-85e6-50e1985c617a"). InnerVolumeSpecName "kube-api-access-66n5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.600199 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.602538 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-scripts" (OuterVolumeSpecName: "scripts") pod "6ac00fcb-556f-496a-85e6-50e1985c617a" (UID: "6ac00fcb-556f-496a-85e6-50e1985c617a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.627448 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-config-data" (OuterVolumeSpecName: "config-data") pod "6ac00fcb-556f-496a-85e6-50e1985c617a" (UID: "6ac00fcb-556f-496a-85e6-50e1985c617a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.635176 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.683091 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ac00fcb-556f-496a-85e6-50e1985c617a" (UID: "6ac00fcb-556f-496a-85e6-50e1985c617a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.698443 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.698791 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.699022 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66n5d\" (UniqueName: \"kubernetes.io/projected/6ac00fcb-556f-496a-85e6-50e1985c617a-kube-api-access-66n5d\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:54 crc kubenswrapper[4802]: I1004 05:08:54.699042 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac00fcb-556f-496a-85e6-50e1985c617a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.095343 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.148623 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b7dxc" event={"ID":"6ac00fcb-556f-496a-85e6-50e1985c617a","Type":"ContainerDied","Data":"224fec3eaa1f2933c98d33536f1e576c9727bb4e06ca7162e18da5df061a9159"} Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.148688 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="224fec3eaa1f2933c98d33536f1e576c9727bb4e06ca7162e18da5df061a9159" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.148761 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b7dxc" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.152350 4802 generic.go:334] "Generic (PLEG): container finished" podID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" containerID="956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29" exitCode=0 Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.153191 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" event={"ID":"fd505bc4-f9fa-4e50-a094-8f46c2d592a0","Type":"ContainerDied","Data":"956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29"} Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.153297 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" event={"ID":"fd505bc4-f9fa-4e50-a094-8f46c2d592a0","Type":"ContainerDied","Data":"73cbacb952f6c6a846a7aabb88d7320c54b298766f6b68c3f22a827474ed7330"} Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.153384 4802 scope.go:117] "RemoveContainer" containerID="956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.153035 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-h9t6h" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.197424 4802 scope.go:117] "RemoveContainer" containerID="2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.204760 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.208215 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-sb\") pod \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.208317 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-dns-svc\") pod \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.208916 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcgtn\" (UniqueName: \"kubernetes.io/projected/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-kube-api-access-jcgtn\") pod \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.208975 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-nb\") pod \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.209089 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-config\") pod \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\" (UID: \"fd505bc4-f9fa-4e50-a094-8f46c2d592a0\") " Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.217872 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-kube-api-access-jcgtn" (OuterVolumeSpecName: "kube-api-access-jcgtn") pod "fd505bc4-f9fa-4e50-a094-8f46c2d592a0" (UID: "fd505bc4-f9fa-4e50-a094-8f46c2d592a0"). InnerVolumeSpecName "kube-api-access-jcgtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.223057 4802 scope.go:117] "RemoveContainer" containerID="956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29" Oct 04 05:08:55 crc kubenswrapper[4802]: E1004 05:08:55.223540 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29\": container with ID starting with 956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29 not found: ID does not exist" containerID="956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.223585 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29"} err="failed to get container status \"956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29\": rpc error: code = NotFound desc = could not find container \"956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29\": container with ID starting with 956b5524c78e24043e783a3ca1515db2e34bce62446e593a359eb9c4edc63d29 not found: ID does not exist" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.223609 4802 scope.go:117] "RemoveContainer" 
containerID="2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046" Oct 04 05:08:55 crc kubenswrapper[4802]: E1004 05:08:55.223938 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046\": container with ID starting with 2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046 not found: ID does not exist" containerID="2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.223971 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046"} err="failed to get container status \"2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046\": rpc error: code = NotFound desc = could not find container \"2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046\": container with ID starting with 2c19364b47012b70d2f16c300cd4116d19e134e4262ef419fc88cb8c878ad046 not found: ID does not exist" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.258723 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-config" (OuterVolumeSpecName: "config") pod "fd505bc4-f9fa-4e50-a094-8f46c2d592a0" (UID: "fd505bc4-f9fa-4e50-a094-8f46c2d592a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.263237 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd505bc4-f9fa-4e50-a094-8f46c2d592a0" (UID: "fd505bc4-f9fa-4e50-a094-8f46c2d592a0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.264218 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fd505bc4-f9fa-4e50-a094-8f46c2d592a0" (UID: "fd505bc4-f9fa-4e50-a094-8f46c2d592a0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.275959 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd505bc4-f9fa-4e50-a094-8f46c2d592a0" (UID: "fd505bc4-f9fa-4e50-a094-8f46c2d592a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.312025 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.312067 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.312083 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcgtn\" (UniqueName: \"kubernetes.io/projected/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-kube-api-access-jcgtn\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.312097 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.312108 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd505bc4-f9fa-4e50-a094-8f46c2d592a0-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.320850 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.321055 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-log" containerID="cri-o://4bc32f197b4992b1b1578f1042653adfedfa919d8468fe73dcff8231ebb786be" gracePeriod=30 Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.321337 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-api" containerID="cri-o://66a4397db3aa3b564541c72bf6078b02fa554f07664f5d5cd2c9f149e9548cf7" gracePeriod=30 Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.327500 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": EOF" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.327573 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": EOF" Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.396459 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.396728 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerName="nova-metadata-log" containerID="cri-o://ba0097160e876cbc26b3f0779cd13544dff06b14bcbf668b813a0d62b3866f26" gracePeriod=30 Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.396756 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerName="nova-metadata-metadata" containerID="cri-o://d70ee0d204b3b27bafba09d20c89fd48774a0bb5c1c4ed58aef05074f89abc34" gracePeriod=30 Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.575921 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"] Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.585176 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-h9t6h"] Oct 04 05:08:55 crc kubenswrapper[4802]: I1004 05:08:55.656168 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.168892 4802 generic.go:334] "Generic (PLEG): container finished" podID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerID="4bc32f197b4992b1b1578f1042653adfedfa919d8468fe73dcff8231ebb786be" exitCode=143 Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.168997 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48596acf-5a97-46e9-a84c-5c4f8e07a998","Type":"ContainerDied","Data":"4bc32f197b4992b1b1578f1042653adfedfa919d8468fe73dcff8231ebb786be"} Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.171701 4802 generic.go:334] "Generic (PLEG): container finished" podID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerID="d70ee0d204b3b27bafba09d20c89fd48774a0bb5c1c4ed58aef05074f89abc34" exitCode=0 Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.171730 4802 generic.go:334] "Generic (PLEG): container finished" podID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" 
containerID="ba0097160e876cbc26b3f0779cd13544dff06b14bcbf668b813a0d62b3866f26" exitCode=143 Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.171785 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2bdc62d-0b58-4043-9495-94a20c94bbe1","Type":"ContainerDied","Data":"d70ee0d204b3b27bafba09d20c89fd48774a0bb5c1c4ed58aef05074f89abc34"} Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.171822 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2bdc62d-0b58-4043-9495-94a20c94bbe1","Type":"ContainerDied","Data":"ba0097160e876cbc26b3f0779cd13544dff06b14bcbf668b813a0d62b3866f26"} Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.383140 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" path="/var/lib/kubelet/pods/fd505bc4-f9fa-4e50-a094-8f46c2d592a0/volumes" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.418029 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.537537 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bdc62d-0b58-4043-9495-94a20c94bbe1-logs\") pod \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.537610 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drdz6\" (UniqueName: \"kubernetes.io/projected/e2bdc62d-0b58-4043-9495-94a20c94bbe1-kube-api-access-drdz6\") pod \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.537684 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-nova-metadata-tls-certs\") pod \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.537738 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-combined-ca-bundle\") pod \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.537824 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-config-data\") pod \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\" (UID: \"e2bdc62d-0b58-4043-9495-94a20c94bbe1\") " Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.537941 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e2bdc62d-0b58-4043-9495-94a20c94bbe1-logs" (OuterVolumeSpecName: "logs") pod "e2bdc62d-0b58-4043-9495-94a20c94bbe1" (UID: "e2bdc62d-0b58-4043-9495-94a20c94bbe1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.538165 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bdc62d-0b58-4043-9495-94a20c94bbe1-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.545862 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bdc62d-0b58-4043-9495-94a20c94bbe1-kube-api-access-drdz6" (OuterVolumeSpecName: "kube-api-access-drdz6") pod "e2bdc62d-0b58-4043-9495-94a20c94bbe1" (UID: "e2bdc62d-0b58-4043-9495-94a20c94bbe1"). InnerVolumeSpecName "kube-api-access-drdz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.569820 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2bdc62d-0b58-4043-9495-94a20c94bbe1" (UID: "e2bdc62d-0b58-4043-9495-94a20c94bbe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.574063 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-config-data" (OuterVolumeSpecName: "config-data") pod "e2bdc62d-0b58-4043-9495-94a20c94bbe1" (UID: "e2bdc62d-0b58-4043-9495-94a20c94bbe1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.599081 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e2bdc62d-0b58-4043-9495-94a20c94bbe1" (UID: "e2bdc62d-0b58-4043-9495-94a20c94bbe1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.639685 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drdz6\" (UniqueName: \"kubernetes.io/projected/e2bdc62d-0b58-4043-9495-94a20c94bbe1-kube-api-access-drdz6\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.639731 4802 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.639744 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:56 crc kubenswrapper[4802]: I1004 05:08:56.639756 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bdc62d-0b58-4043-9495-94a20c94bbe1-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.189264 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2bdc62d-0b58-4043-9495-94a20c94bbe1","Type":"ContainerDied","Data":"d1a30aeec8d02714cc55eafa322848618362cbeda0431c2751f7b4eb18bb2fcf"} Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.189737 4802 scope.go:117] 
"RemoveContainer" containerID="d70ee0d204b3b27bafba09d20c89fd48774a0bb5c1c4ed58aef05074f89abc34" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.189292 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.192583 4802 generic.go:334] "Generic (PLEG): container finished" podID="a9896243-f600-4461-ac5c-e22070c86c51" containerID="42f1ad68781c477f6ed3ed9eee01618c0deb791f59c951ba4f9302347df49f38" exitCode=0 Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.192753 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="070ca973-8f00-47fe-9552-99d45c1c0ec0" containerName="nova-scheduler-scheduler" containerID="cri-o://6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25" gracePeriod=30 Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.192978 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" event={"ID":"a9896243-f600-4461-ac5c-e22070c86c51","Type":"ContainerDied","Data":"42f1ad68781c477f6ed3ed9eee01618c0deb791f59c951ba4f9302347df49f38"} Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.215715 4802 scope.go:117] "RemoveContainer" containerID="ba0097160e876cbc26b3f0779cd13544dff06b14bcbf668b813a0d62b3866f26" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.252701 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.267287 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.280785 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:57 crc kubenswrapper[4802]: E1004 05:08:57.281617 4802 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" containerName="init" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.281635 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" containerName="init" Oct 04 05:08:57 crc kubenswrapper[4802]: E1004 05:08:57.281723 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac00fcb-556f-496a-85e6-50e1985c617a" containerName="nova-manage" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.281741 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac00fcb-556f-496a-85e6-50e1985c617a" containerName="nova-manage" Oct 04 05:08:57 crc kubenswrapper[4802]: E1004 05:08:57.281764 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerName="nova-metadata-metadata" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.281773 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerName="nova-metadata-metadata" Oct 04 05:08:57 crc kubenswrapper[4802]: E1004 05:08:57.281795 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" containerName="dnsmasq-dns" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.281801 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" containerName="dnsmasq-dns" Oct 04 05:08:57 crc kubenswrapper[4802]: E1004 05:08:57.281816 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerName="nova-metadata-log" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.281823 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerName="nova-metadata-log" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.282368 4802 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerName="nova-metadata-metadata" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.282496 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd505bc4-f9fa-4e50-a094-8f46c2d592a0" containerName="dnsmasq-dns" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.282505 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac00fcb-556f-496a-85e6-50e1985c617a" containerName="nova-manage" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.282523 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" containerName="nova-metadata-log" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.298332 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.306433 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.306545 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.322279 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.352909 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.352952 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0580afa6-ca5b-412b-98cb-734acd556bb8-logs\") 
pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.352986 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-config-data\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.353050 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2g44\" (UniqueName: \"kubernetes.io/projected/0580afa6-ca5b-412b-98cb-734acd556bb8-kube-api-access-n2g44\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.353087 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.453739 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.453844 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0580afa6-ca5b-412b-98cb-734acd556bb8-logs\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 
05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.453862 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.453877 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-config-data\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.453930 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2g44\" (UniqueName: \"kubernetes.io/projected/0580afa6-ca5b-412b-98cb-734acd556bb8-kube-api-access-n2g44\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.456490 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0580afa6-ca5b-412b-98cb-734acd556bb8-logs\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.459393 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-config-data\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.459976 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.468918 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.479182 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2g44\" (UniqueName: \"kubernetes.io/projected/0580afa6-ca5b-412b-98cb-734acd556bb8-kube-api-access-n2g44\") pod \"nova-metadata-0\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " pod="openstack/nova-metadata-0" Oct 04 05:08:57 crc kubenswrapper[4802]: I1004 05:08:57.619628 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.046810 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.206205 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0580afa6-ca5b-412b-98cb-734acd556bb8","Type":"ContainerStarted","Data":"ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93"} Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.206259 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0580afa6-ca5b-412b-98cb-734acd556bb8","Type":"ContainerStarted","Data":"1dcf666c9b9a885038e8c5cf6302cd019f404ed4dba23a4f0b1043697d1e55aa"} Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.377175 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2bdc62d-0b58-4043-9495-94a20c94bbe1" path="/var/lib/kubelet/pods/e2bdc62d-0b58-4043-9495-94a20c94bbe1/volumes" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.544376 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.572358 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-scripts\") pod \"a9896243-f600-4461-ac5c-e22070c86c51\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.572421 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-combined-ca-bundle\") pod \"a9896243-f600-4461-ac5c-e22070c86c51\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.572520 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d5mq\" (UniqueName: \"kubernetes.io/projected/a9896243-f600-4461-ac5c-e22070c86c51-kube-api-access-8d5mq\") pod \"a9896243-f600-4461-ac5c-e22070c86c51\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.577962 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9896243-f600-4461-ac5c-e22070c86c51-kube-api-access-8d5mq" (OuterVolumeSpecName: "kube-api-access-8d5mq") pod "a9896243-f600-4461-ac5c-e22070c86c51" (UID: "a9896243-f600-4461-ac5c-e22070c86c51"). InnerVolumeSpecName "kube-api-access-8d5mq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.580994 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-config-data\") pod \"a9896243-f600-4461-ac5c-e22070c86c51\" (UID: \"a9896243-f600-4461-ac5c-e22070c86c51\") " Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.582494 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d5mq\" (UniqueName: \"kubernetes.io/projected/a9896243-f600-4461-ac5c-e22070c86c51-kube-api-access-8d5mq\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.585746 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-scripts" (OuterVolumeSpecName: "scripts") pod "a9896243-f600-4461-ac5c-e22070c86c51" (UID: "a9896243-f600-4461-ac5c-e22070c86c51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.608630 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-config-data" (OuterVolumeSpecName: "config-data") pod "a9896243-f600-4461-ac5c-e22070c86c51" (UID: "a9896243-f600-4461-ac5c-e22070c86c51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.621440 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9896243-f600-4461-ac5c-e22070c86c51" (UID: "a9896243-f600-4461-ac5c-e22070c86c51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.683621 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.683672 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:58 crc kubenswrapper[4802]: I1004 05:08:58.683691 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9896243-f600-4461-ac5c-e22070c86c51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.214595 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0580afa6-ca5b-412b-98cb-734acd556bb8","Type":"ContainerStarted","Data":"ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e"} Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.216805 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.216833 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7pgm8" event={"ID":"a9896243-f600-4461-ac5c-e22070c86c51","Type":"ContainerDied","Data":"54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3"} Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.217138 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.256548 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.256509802 podStartE2EDuration="2.256509802s" podCreationTimestamp="2025-10-04 05:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:08:59.236617426 +0000 UTC m=+1381.644618141" watchObservedRunningTime="2025-10-04 05:08:59.256509802 +0000 UTC m=+1381.664510437" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.300410 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 04 05:08:59 crc kubenswrapper[4802]: E1004 05:08:59.300892 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9896243-f600-4461-ac5c-e22070c86c51" containerName="nova-cell1-conductor-db-sync" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.300912 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9896243-f600-4461-ac5c-e22070c86c51" containerName="nova-cell1-conductor-db-sync" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.301150 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9896243-f600-4461-ac5c-e22070c86c51" containerName="nova-cell1-conductor-db-sync" Oct 04 05:08:59 crc 
kubenswrapper[4802]: I1004 05:08:59.301789 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.305578 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.314778 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.398440 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rf4h\" (UniqueName: \"kubernetes.io/projected/98fc65e1-5e5c-42c7-8902-34e1aa56519e-kube-api-access-6rf4h\") pod \"nova-cell1-conductor-0\" (UID: \"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.398527 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fc65e1-5e5c-42c7-8902-34e1aa56519e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.398566 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fc65e1-5e5c-42c7-8902-34e1aa56519e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.501122 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rf4h\" (UniqueName: \"kubernetes.io/projected/98fc65e1-5e5c-42c7-8902-34e1aa56519e-kube-api-access-6rf4h\") pod \"nova-cell1-conductor-0\" (UID: 
\"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.501526 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fc65e1-5e5c-42c7-8902-34e1aa56519e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.501731 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fc65e1-5e5c-42c7-8902-34e1aa56519e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.506440 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fc65e1-5e5c-42c7-8902-34e1aa56519e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.507733 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fc65e1-5e5c-42c7-8902-34e1aa56519e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.518713 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rf4h\" (UniqueName: \"kubernetes.io/projected/98fc65e1-5e5c-42c7-8902-34e1aa56519e-kube-api-access-6rf4h\") pod \"nova-cell1-conductor-0\" (UID: \"98fc65e1-5e5c-42c7-8902-34e1aa56519e\") " pod="openstack/nova-cell1-conductor-0" Oct 04 05:08:59 crc kubenswrapper[4802]: E1004 
05:08:59.603412 4802 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 04 05:08:59 crc kubenswrapper[4802]: E1004 05:08:59.605076 4802 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 04 05:08:59 crc kubenswrapper[4802]: E1004 05:08:59.606326 4802 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 04 05:08:59 crc kubenswrapper[4802]: E1004 05:08:59.606374 4802 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="070ca973-8f00-47fe-9552-99d45c1c0ec0" containerName="nova-scheduler-scheduler" Oct 04 05:08:59 crc kubenswrapper[4802]: I1004 05:08:59.665695 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 04 05:09:00 crc kubenswrapper[4802]: I1004 05:09:00.102828 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 04 05:09:00 crc kubenswrapper[4802]: I1004 05:09:00.226820 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"98fc65e1-5e5c-42c7-8902-34e1aa56519e","Type":"ContainerStarted","Data":"6c805925967b93af99f0641b739ca6358132e90eb523ab2e9c551a8a24b8d531"} Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.240537 4802 generic.go:334] "Generic (PLEG): container finished" podID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerID="66a4397db3aa3b564541c72bf6078b02fa554f07664f5d5cd2c9f149e9548cf7" exitCode=0 Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.240875 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48596acf-5a97-46e9-a84c-5c4f8e07a998","Type":"ContainerDied","Data":"66a4397db3aa3b564541c72bf6078b02fa554f07664f5d5cd2c9f149e9548cf7"} Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.240901 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48596acf-5a97-46e9-a84c-5c4f8e07a998","Type":"ContainerDied","Data":"f09b2619b67a70defc5049679288aadf251c6166cc05960608bac4a34d2cbe21"} Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.240910 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f09b2619b67a70defc5049679288aadf251c6166cc05960608bac4a34d2cbe21" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.245721 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"98fc65e1-5e5c-42c7-8902-34e1aa56519e","Type":"ContainerStarted","Data":"b7e5044e7a62809b1eaa926c4b66ac37eba849f7fcf5161d60d2a4d3239785fc"} Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.246712 4802 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.263850 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.26382637 podStartE2EDuration="2.26382637s" podCreationTimestamp="2025-10-04 05:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:01.262166383 +0000 UTC m=+1383.670167038" watchObservedRunningTime="2025-10-04 05:09:01.26382637 +0000 UTC m=+1383.671826995" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.273658 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.337391 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-combined-ca-bundle\") pod \"48596acf-5a97-46e9-a84c-5c4f8e07a998\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.337534 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-config-data\") pod \"48596acf-5a97-46e9-a84c-5c4f8e07a998\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.337666 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwblr\" (UniqueName: \"kubernetes.io/projected/48596acf-5a97-46e9-a84c-5c4f8e07a998-kube-api-access-pwblr\") pod \"48596acf-5a97-46e9-a84c-5c4f8e07a998\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.337755 4802 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48596acf-5a97-46e9-a84c-5c4f8e07a998-logs\") pod \"48596acf-5a97-46e9-a84c-5c4f8e07a998\" (UID: \"48596acf-5a97-46e9-a84c-5c4f8e07a998\") " Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.341595 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48596acf-5a97-46e9-a84c-5c4f8e07a998-logs" (OuterVolumeSpecName: "logs") pod "48596acf-5a97-46e9-a84c-5c4f8e07a998" (UID: "48596acf-5a97-46e9-a84c-5c4f8e07a998"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.349137 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48596acf-5a97-46e9-a84c-5c4f8e07a998-kube-api-access-pwblr" (OuterVolumeSpecName: "kube-api-access-pwblr") pod "48596acf-5a97-46e9-a84c-5c4f8e07a998" (UID: "48596acf-5a97-46e9-a84c-5c4f8e07a998"). InnerVolumeSpecName "kube-api-access-pwblr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.373947 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48596acf-5a97-46e9-a84c-5c4f8e07a998" (UID: "48596acf-5a97-46e9-a84c-5c4f8e07a998"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.375904 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-config-data" (OuterVolumeSpecName: "config-data") pod "48596acf-5a97-46e9-a84c-5c4f8e07a998" (UID: "48596acf-5a97-46e9-a84c-5c4f8e07a998"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.439604 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48596acf-5a97-46e9-a84c-5c4f8e07a998-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.439634 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.439708 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48596acf-5a97-46e9-a84c-5c4f8e07a998-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.439742 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwblr\" (UniqueName: \"kubernetes.io/projected/48596acf-5a97-46e9-a84c-5c4f8e07a998-kube-api-access-pwblr\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:01 crc kubenswrapper[4802]: E1004 05:09:01.755391 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9896243_f600_4461_ac5c_e22070c86c51.slice/crio-54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3\": RecentStats: unable to find data in memory cache]" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.871479 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.947195 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pzgv\" (UniqueName: \"kubernetes.io/projected/070ca973-8f00-47fe-9552-99d45c1c0ec0-kube-api-access-2pzgv\") pod \"070ca973-8f00-47fe-9552-99d45c1c0ec0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.947470 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-combined-ca-bundle\") pod \"070ca973-8f00-47fe-9552-99d45c1c0ec0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.947583 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-config-data\") pod \"070ca973-8f00-47fe-9552-99d45c1c0ec0\" (UID: \"070ca973-8f00-47fe-9552-99d45c1c0ec0\") " Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.951187 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070ca973-8f00-47fe-9552-99d45c1c0ec0-kube-api-access-2pzgv" (OuterVolumeSpecName: "kube-api-access-2pzgv") pod "070ca973-8f00-47fe-9552-99d45c1c0ec0" (UID: "070ca973-8f00-47fe-9552-99d45c1c0ec0"). InnerVolumeSpecName "kube-api-access-2pzgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.974399 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "070ca973-8f00-47fe-9552-99d45c1c0ec0" (UID: "070ca973-8f00-47fe-9552-99d45c1c0ec0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:01 crc kubenswrapper[4802]: I1004 05:09:01.977339 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-config-data" (OuterVolumeSpecName: "config-data") pod "070ca973-8f00-47fe-9552-99d45c1c0ec0" (UID: "070ca973-8f00-47fe-9552-99d45c1c0ec0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.049976 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pzgv\" (UniqueName: \"kubernetes.io/projected/070ca973-8f00-47fe-9552-99d45c1c0ec0-kube-api-access-2pzgv\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.050018 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.050032 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ca973-8f00-47fe-9552-99d45c1c0ec0-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.260588 4802 generic.go:334] "Generic (PLEG): container finished" podID="070ca973-8f00-47fe-9552-99d45c1c0ec0" containerID="6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25" exitCode=0 Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.260684 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.261581 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.260703 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"070ca973-8f00-47fe-9552-99d45c1c0ec0","Type":"ContainerDied","Data":"6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25"} Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.261699 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"070ca973-8f00-47fe-9552-99d45c1c0ec0","Type":"ContainerDied","Data":"a5b1a2f6f7f102effdd5079a1434d21f37e0d8013ea470f61a05a67247ce6438"} Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.261720 4802 scope.go:117] "RemoveContainer" containerID="6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.286113 4802 scope.go:117] "RemoveContainer" containerID="6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25" Oct 04 05:09:02 crc kubenswrapper[4802]: E1004 05:09:02.286470 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25\": container with ID starting with 6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25 not found: ID does not exist" containerID="6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.286496 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25"} err="failed to get container status \"6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25\": rpc error: code = NotFound desc = could not find container \"6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25\": container with ID starting with 
6c7f22e7d2d15efe2c2072930649e6362e4af03014b223770ee496a9adee6c25 not found: ID does not exist" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.310035 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.324198 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.337459 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.352630 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.358893 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:02 crc kubenswrapper[4802]: E1004 05:09:02.359376 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-api" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.359396 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-api" Oct 04 05:09:02 crc kubenswrapper[4802]: E1004 05:09:02.359418 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ca973-8f00-47fe-9552-99d45c1c0ec0" containerName="nova-scheduler-scheduler" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.359427 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ca973-8f00-47fe-9552-99d45c1c0ec0" containerName="nova-scheduler-scheduler" Oct 04 05:09:02 crc kubenswrapper[4802]: E1004 05:09:02.359440 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-log" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.359448 4802 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-log" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.359665 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="070ca973-8f00-47fe-9552-99d45c1c0ec0" containerName="nova-scheduler-scheduler" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.359691 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-log" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.359723 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" containerName="nova-api-api" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.361272 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.398425 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.413506 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070ca973-8f00-47fe-9552-99d45c1c0ec0" path="/var/lib/kubelet/pods/070ca973-8f00-47fe-9552-99d45c1c0ec0/volumes" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.421714 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48596acf-5a97-46e9-a84c-5c4f8e07a998" path="/var/lib/kubelet/pods/48596acf-5a97-46e9-a84c-5c4f8e07a998/volumes" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.422382 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.422414 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.424408 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.431181 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.438686 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.461807 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-config-data\") pod \"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.461866 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.461900 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5qxp\" (UniqueName: \"kubernetes.io/projected/6866082e-f58d-4942-b27e-7f13543dbcd3-kube-api-access-f5qxp\") pod \"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.461991 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.462025 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nctj\" (UniqueName: \"kubernetes.io/projected/8c1ff060-6576-403c-a4c2-9f27272f57de-kube-api-access-2nctj\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.462046 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-config-data\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.462120 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c1ff060-6576-403c-a4c2-9f27272f57de-logs\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.564741 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c1ff060-6576-403c-a4c2-9f27272f57de-logs\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.564833 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-config-data\") pod \"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.564865 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.564897 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5qxp\" (UniqueName: \"kubernetes.io/projected/6866082e-f58d-4942-b27e-7f13543dbcd3-kube-api-access-f5qxp\") pod \"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.564975 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.565007 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nctj\" (UniqueName: \"kubernetes.io/projected/8c1ff060-6576-403c-a4c2-9f27272f57de-kube-api-access-2nctj\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.565025 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-config-data\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.567455 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c1ff060-6576-403c-a4c2-9f27272f57de-logs\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.570313 4802 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.574187 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-config-data\") pod \"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.574400 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-config-data\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.574690 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.589354 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5qxp\" (UniqueName: \"kubernetes.io/projected/6866082e-f58d-4942-b27e-7f13543dbcd3-kube-api-access-f5qxp\") pod \"nova-scheduler-0\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.593212 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nctj\" (UniqueName: \"kubernetes.io/projected/8c1ff060-6576-403c-a4c2-9f27272f57de-kube-api-access-2nctj\") pod \"nova-api-0\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " 
pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.620240 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.620282 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.693111 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:02 crc kubenswrapper[4802]: I1004 05:09:02.758340 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:09:03 crc kubenswrapper[4802]: I1004 05:09:03.186475 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:03 crc kubenswrapper[4802]: I1004 05:09:03.262964 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:03 crc kubenswrapper[4802]: I1004 05:09:03.274284 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c1ff060-6576-403c-a4c2-9f27272f57de","Type":"ContainerStarted","Data":"06e2f4099512841f0191e78c4e407870e73d033335900e14e17c28a78fb47116"} Oct 04 05:09:04 crc kubenswrapper[4802]: I1004 05:09:04.284135 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c1ff060-6576-403c-a4c2-9f27272f57de","Type":"ContainerStarted","Data":"6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa"} Oct 04 05:09:04 crc kubenswrapper[4802]: I1004 05:09:04.284496 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c1ff060-6576-403c-a4c2-9f27272f57de","Type":"ContainerStarted","Data":"ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906"} Oct 04 05:09:04 crc kubenswrapper[4802]: I1004 05:09:04.285584 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"6866082e-f58d-4942-b27e-7f13543dbcd3","Type":"ContainerStarted","Data":"f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc"} Oct 04 05:09:04 crc kubenswrapper[4802]: I1004 05:09:04.285608 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6866082e-f58d-4942-b27e-7f13543dbcd3","Type":"ContainerStarted","Data":"d32f6df112ac8edbdba29eb282e47181e5e40b9a4159f5fbfbbb6261a573f006"} Oct 04 05:09:04 crc kubenswrapper[4802]: I1004 05:09:04.303073 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.303057058 podStartE2EDuration="2.303057058s" podCreationTimestamp="2025-10-04 05:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:04.298255531 +0000 UTC m=+1386.706256166" watchObservedRunningTime="2025-10-04 05:09:04.303057058 +0000 UTC m=+1386.711057683" Oct 04 05:09:04 crc kubenswrapper[4802]: I1004 05:09:04.323011 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.322995625 podStartE2EDuration="2.322995625s" podCreationTimestamp="2025-10-04 05:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:04.320733801 +0000 UTC m=+1386.728734446" watchObservedRunningTime="2025-10-04 05:09:04.322995625 +0000 UTC m=+1386.730996250" Oct 04 05:09:07 crc kubenswrapper[4802]: I1004 05:09:07.620146 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 04 05:09:07 crc kubenswrapper[4802]: I1004 05:09:07.620479 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 04 05:09:07 crc kubenswrapper[4802]: I1004 
05:09:07.758911 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 04 05:09:08 crc kubenswrapper[4802]: I1004 05:09:08.637877 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 04 05:09:08 crc kubenswrapper[4802]: I1004 05:09:08.637911 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 04 05:09:09 crc kubenswrapper[4802]: I1004 05:09:09.698178 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 04 05:09:11 crc kubenswrapper[4802]: I1004 05:09:11.250807 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 04 05:09:11 crc kubenswrapper[4802]: E1004 05:09:11.971900 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9896243_f600_4461_ac5c_e22070c86c51.slice/crio-54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3\": RecentStats: unable to find data in memory cache]" Oct 04 05:09:12 crc kubenswrapper[4802]: I1004 05:09:12.693600 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 04 05:09:12 crc kubenswrapper[4802]: I1004 05:09:12.693670 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 04 05:09:12 crc kubenswrapper[4802]: I1004 
05:09:12.759337 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 04 05:09:12 crc kubenswrapper[4802]: I1004 05:09:12.789678 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 04 05:09:13 crc kubenswrapper[4802]: I1004 05:09:13.394390 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 04 05:09:13 crc kubenswrapper[4802]: I1004 05:09:13.775909 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 04 05:09:13 crc kubenswrapper[4802]: I1004 05:09:13.775909 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 04 05:09:17 crc kubenswrapper[4802]: I1004 05:09:17.628814 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 04 05:09:17 crc kubenswrapper[4802]: I1004 05:09:17.629201 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 04 05:09:17 crc kubenswrapper[4802]: I1004 05:09:17.638089 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 04 05:09:17 crc kubenswrapper[4802]: I1004 05:09:17.638365 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 04 05:09:19 crc kubenswrapper[4802]: I1004 05:09:19.406146 4802 generic.go:334] "Generic (PLEG): container finished" 
podID="96666d3e-d058-40bf-95b4-fa099cd8694d" containerID="f6d58e942f23c299050f0d5a9bbb191786883d98ed02def333191655d0c7fd46" exitCode=137 Oct 04 05:09:19 crc kubenswrapper[4802]: I1004 05:09:19.406229 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"96666d3e-d058-40bf-95b4-fa099cd8694d","Type":"ContainerDied","Data":"f6d58e942f23c299050f0d5a9bbb191786883d98ed02def333191655d0c7fd46"} Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.033081 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.082793 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8c6m\" (UniqueName: \"kubernetes.io/projected/96666d3e-d058-40bf-95b4-fa099cd8694d-kube-api-access-z8c6m\") pod \"96666d3e-d058-40bf-95b4-fa099cd8694d\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.082852 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-config-data\") pod \"96666d3e-d058-40bf-95b4-fa099cd8694d\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.082938 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-combined-ca-bundle\") pod \"96666d3e-d058-40bf-95b4-fa099cd8694d\" (UID: \"96666d3e-d058-40bf-95b4-fa099cd8694d\") " Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.090843 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96666d3e-d058-40bf-95b4-fa099cd8694d-kube-api-access-z8c6m" (OuterVolumeSpecName: "kube-api-access-z8c6m") pod 
"96666d3e-d058-40bf-95b4-fa099cd8694d" (UID: "96666d3e-d058-40bf-95b4-fa099cd8694d"). InnerVolumeSpecName "kube-api-access-z8c6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.108276 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96666d3e-d058-40bf-95b4-fa099cd8694d" (UID: "96666d3e-d058-40bf-95b4-fa099cd8694d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.108848 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-config-data" (OuterVolumeSpecName: "config-data") pod "96666d3e-d058-40bf-95b4-fa099cd8694d" (UID: "96666d3e-d058-40bf-95b4-fa099cd8694d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.184838 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8c6m\" (UniqueName: \"kubernetes.io/projected/96666d3e-d058-40bf-95b4-fa099cd8694d-kube-api-access-z8c6m\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.184872 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.184915 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96666d3e-d058-40bf-95b4-fa099cd8694d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.417290 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"96666d3e-d058-40bf-95b4-fa099cd8694d","Type":"ContainerDied","Data":"0b99eb213220edb7610ea7298daf77dcd7881019cbef0faa1996812978b864b8"} Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.417343 4802 scope.go:117] "RemoveContainer" containerID="f6d58e942f23c299050f0d5a9bbb191786883d98ed02def333191655d0c7fd46" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.418475 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.444729 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.456801 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.468977 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:09:20 crc kubenswrapper[4802]: E1004 05:09:20.469582 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96666d3e-d058-40bf-95b4-fa099cd8694d" containerName="nova-cell1-novncproxy-novncproxy" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.469712 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="96666d3e-d058-40bf-95b4-fa099cd8694d" containerName="nova-cell1-novncproxy-novncproxy" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.470015 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="96666d3e-d058-40bf-95b4-fa099cd8694d" containerName="nova-cell1-novncproxy-novncproxy" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.470785 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.473424 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.473601 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.473755 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.483205 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.590884 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srndv\" (UniqueName: \"kubernetes.io/projected/dac2cbfb-9cef-4319-92db-1c352393b407-kube-api-access-srndv\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.591037 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.591307 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc 
kubenswrapper[4802]: I1004 05:09:20.591427 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.591520 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.693602 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.693673 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.693718 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srndv\" (UniqueName: \"kubernetes.io/projected/dac2cbfb-9cef-4319-92db-1c352393b407-kube-api-access-srndv\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc 
kubenswrapper[4802]: I1004 05:09:20.693756 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.693829 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.697244 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.698459 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.700453 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.701329 4802 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dac2cbfb-9cef-4319-92db-1c352393b407-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.716570 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srndv\" (UniqueName: \"kubernetes.io/projected/dac2cbfb-9cef-4319-92db-1c352393b407-kube-api-access-srndv\") pod \"nova-cell1-novncproxy-0\" (UID: \"dac2cbfb-9cef-4319-92db-1c352393b407\") " pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:20 crc kubenswrapper[4802]: I1004 05:09:20.792952 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:21 crc kubenswrapper[4802]: I1004 05:09:21.238718 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 04 05:09:21 crc kubenswrapper[4802]: I1004 05:09:21.427615 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dac2cbfb-9cef-4319-92db-1c352393b407","Type":"ContainerStarted","Data":"4144a165e24599d8ef2de8e0b51c0c60874fdf922ef79a06b4b2b4ff2665fce0"} Oct 04 05:09:21 crc kubenswrapper[4802]: I1004 05:09:21.427968 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dac2cbfb-9cef-4319-92db-1c352393b407","Type":"ContainerStarted","Data":"01cb5b1d63a4c0916b33074a6781b6a9356ab9cec3b41502c597204e9d545ea4"} Oct 04 05:09:21 crc kubenswrapper[4802]: I1004 05:09:21.454731 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.454704899 podStartE2EDuration="1.454704899s" podCreationTimestamp="2025-10-04 05:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-04 05:09:21.443262563 +0000 UTC m=+1403.851263208" watchObservedRunningTime="2025-10-04 05:09:21.454704899 +0000 UTC m=+1403.862705534" Oct 04 05:09:22 crc kubenswrapper[4802]: E1004 05:09:22.175748 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9896243_f600_4461_ac5c_e22070c86c51.slice/crio-54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3\": RecentStats: unable to find data in memory cache]" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.371750 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96666d3e-d058-40bf-95b4-fa099cd8694d" path="/var/lib/kubelet/pods/96666d3e-d058-40bf-95b4-fa099cd8694d/volumes" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.697793 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.697892 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.698300 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.698338 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.701110 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.702128 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.877502 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-4wll7"] Oct 04 05:09:22 crc kubenswrapper[4802]: 
I1004 05:09:22.880340 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.889850 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-4wll7"] Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.937248 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.937328 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-config\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.937375 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcbgt\" (UniqueName: \"kubernetes.io/projected/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-kube-api-access-qcbgt\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.937421 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:22 crc kubenswrapper[4802]: I1004 05:09:22.937472 
4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-dns-svc\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.039277 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.039377 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-dns-svc\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.039441 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.039490 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-config\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.039537 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qcbgt\" (UniqueName: \"kubernetes.io/projected/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-kube-api-access-qcbgt\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.040524 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-dns-svc\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.040585 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.040842 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-config\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.041202 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.063051 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcbgt\" (UniqueName: 
\"kubernetes.io/projected/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-kube-api-access-qcbgt\") pod \"dnsmasq-dns-5b856c5697-4wll7\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.219620 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:23 crc kubenswrapper[4802]: I1004 05:09:23.731176 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-4wll7"] Oct 04 05:09:23 crc kubenswrapper[4802]: W1004 05:09:23.737924 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0944ab8_e05e_4d57_ac11_1d81b8cbfd77.slice/crio-efe6bb2575384141404a7eb973cbdf0ee0632a181e6cd877dc8011b9dcb87de3 WatchSource:0}: Error finding container efe6bb2575384141404a7eb973cbdf0ee0632a181e6cd877dc8011b9dcb87de3: Status 404 returned error can't find the container with id efe6bb2575384141404a7eb973cbdf0ee0632a181e6cd877dc8011b9dcb87de3 Oct 04 05:09:24 crc kubenswrapper[4802]: I1004 05:09:24.464216 4802 generic.go:334] "Generic (PLEG): container finished" podID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" containerID="94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e" exitCode=0 Oct 04 05:09:24 crc kubenswrapper[4802]: I1004 05:09:24.464310 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" event={"ID":"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77","Type":"ContainerDied","Data":"94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e"} Oct 04 05:09:24 crc kubenswrapper[4802]: I1004 05:09:24.464589 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" event={"ID":"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77","Type":"ContainerStarted","Data":"efe6bb2575384141404a7eb973cbdf0ee0632a181e6cd877dc8011b9dcb87de3"} Oct 04 
05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.071538 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.071832 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="ceilometer-central-agent" containerID="cri-o://78685555954ce23468095d54f93a56cd08d725f086dbf2bb1d465e3bae68b0fe" gracePeriod=30 Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.071952 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="proxy-httpd" containerID="cri-o://dc330fe3044f910c706ae3c60b332f433939af78a90be76bfead0857e1b499d6" gracePeriod=30 Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.071983 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="sg-core" containerID="cri-o://6c4e33375fc9af2f36499f2bac755ab3d5eedb0a12f8886190decd18014e9ce2" gracePeriod=30 Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.072013 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="ceilometer-notification-agent" containerID="cri-o://f374e654858d2f071d180e75cbf5bcf444b0d4fda19fc0c0ce21c04253cff8e5" gracePeriod=30 Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.203846 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.473603 4802 generic.go:334] "Generic (PLEG): container finished" podID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerID="dc330fe3044f910c706ae3c60b332f433939af78a90be76bfead0857e1b499d6" exitCode=0 Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 
05:09:25.473683 4802 generic.go:334] "Generic (PLEG): container finished" podID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerID="6c4e33375fc9af2f36499f2bac755ab3d5eedb0a12f8886190decd18014e9ce2" exitCode=2 Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.473685 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerDied","Data":"dc330fe3044f910c706ae3c60b332f433939af78a90be76bfead0857e1b499d6"} Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.473720 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerDied","Data":"6c4e33375fc9af2f36499f2bac755ab3d5eedb0a12f8886190decd18014e9ce2"} Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.475594 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" event={"ID":"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77","Type":"ContainerStarted","Data":"a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd"} Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.475740 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.475891 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-log" containerID="cri-o://ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906" gracePeriod=30 Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.476040 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-api" containerID="cri-o://6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa" gracePeriod=30 Oct 04 05:09:25 crc 
kubenswrapper[4802]: I1004 05:09:25.501445 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" podStartSLOduration=3.501429109 podStartE2EDuration="3.501429109s" podCreationTimestamp="2025-10-04 05:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:25.496145079 +0000 UTC m=+1407.904145724" watchObservedRunningTime="2025-10-04 05:09:25.501429109 +0000 UTC m=+1407.909429734" Oct 04 05:09:25 crc kubenswrapper[4802]: I1004 05:09:25.795662 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:26 crc kubenswrapper[4802]: I1004 05:09:26.489586 4802 generic.go:334] "Generic (PLEG): container finished" podID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerID="ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906" exitCode=143 Oct 04 05:09:26 crc kubenswrapper[4802]: I1004 05:09:26.489660 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c1ff060-6576-403c-a4c2-9f27272f57de","Type":"ContainerDied","Data":"ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906"} Oct 04 05:09:26 crc kubenswrapper[4802]: I1004 05:09:26.492842 4802 generic.go:334] "Generic (PLEG): container finished" podID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerID="78685555954ce23468095d54f93a56cd08d725f086dbf2bb1d465e3bae68b0fe" exitCode=0 Oct 04 05:09:26 crc kubenswrapper[4802]: I1004 05:09:26.492920 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerDied","Data":"78685555954ce23468095d54f93a56cd08d725f086dbf2bb1d465e3bae68b0fe"} Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.514378 4802 generic.go:334] "Generic (PLEG): container finished" podID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" 
containerID="f374e654858d2f071d180e75cbf5bcf444b0d4fda19fc0c0ce21c04253cff8e5" exitCode=0 Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.514450 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerDied","Data":"f374e654858d2f071d180e75cbf5bcf444b0d4fda19fc0c0ce21c04253cff8e5"} Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.660011 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.725982 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-config-data\") pod \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.726067 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-sg-core-conf-yaml\") pod \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.726122 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm5f9\" (UniqueName: \"kubernetes.io/projected/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-kube-api-access-vm5f9\") pod \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.726255 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-log-httpd\") pod \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " Oct 04 05:09:27 crc 
kubenswrapper[4802]: I1004 05:09:27.726329 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-ceilometer-tls-certs\") pod \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.726367 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-combined-ca-bundle\") pod \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.726482 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-run-httpd\") pod \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.726511 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-scripts\") pod \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\" (UID: \"5f74aeeb-8b92-4e1f-8588-a41ee53f7259\") " Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.726859 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5f74aeeb-8b92-4e1f-8588-a41ee53f7259" (UID: "5f74aeeb-8b92-4e1f-8588-a41ee53f7259"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.727067 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5f74aeeb-8b92-4e1f-8588-a41ee53f7259" (UID: "5f74aeeb-8b92-4e1f-8588-a41ee53f7259"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.727285 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.727312 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.732688 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-scripts" (OuterVolumeSpecName: "scripts") pod "5f74aeeb-8b92-4e1f-8588-a41ee53f7259" (UID: "5f74aeeb-8b92-4e1f-8588-a41ee53f7259"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.757177 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5f74aeeb-8b92-4e1f-8588-a41ee53f7259" (UID: "5f74aeeb-8b92-4e1f-8588-a41ee53f7259"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.766171 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-kube-api-access-vm5f9" (OuterVolumeSpecName: "kube-api-access-vm5f9") pod "5f74aeeb-8b92-4e1f-8588-a41ee53f7259" (UID: "5f74aeeb-8b92-4e1f-8588-a41ee53f7259"). InnerVolumeSpecName "kube-api-access-vm5f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.789662 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5f74aeeb-8b92-4e1f-8588-a41ee53f7259" (UID: "5f74aeeb-8b92-4e1f-8588-a41ee53f7259"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.825447 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f74aeeb-8b92-4e1f-8588-a41ee53f7259" (UID: "5f74aeeb-8b92-4e1f-8588-a41ee53f7259"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.825940 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-config-data" (OuterVolumeSpecName: "config-data") pod "5f74aeeb-8b92-4e1f-8588-a41ee53f7259" (UID: "5f74aeeb-8b92-4e1f-8588-a41ee53f7259"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.829091 4802 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.829114 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.829125 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.829135 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.829142 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:27 crc kubenswrapper[4802]: I1004 05:09:27.829150 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm5f9\" (UniqueName: \"kubernetes.io/projected/5f74aeeb-8b92-4e1f-8588-a41ee53f7259-kube-api-access-vm5f9\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.526932 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f74aeeb-8b92-4e1f-8588-a41ee53f7259","Type":"ContainerDied","Data":"609e70130c656efb34a8aa109b11411d8c716ece5bc9190154677125960d8607"} Oct 04 05:09:28 crc 
kubenswrapper[4802]: I1004 05:09:28.526987 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.527389 4802 scope.go:117] "RemoveContainer" containerID="dc330fe3044f910c706ae3c60b332f433939af78a90be76bfead0857e1b499d6" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.555188 4802 scope.go:117] "RemoveContainer" containerID="6c4e33375fc9af2f36499f2bac755ab3d5eedb0a12f8886190decd18014e9ce2" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.558826 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.566413 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.581471 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:09:28 crc kubenswrapper[4802]: E1004 05:09:28.581993 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="ceilometer-notification-agent" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.582017 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="ceilometer-notification-agent" Oct 04 05:09:28 crc kubenswrapper[4802]: E1004 05:09:28.582043 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="ceilometer-central-agent" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.582051 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="ceilometer-central-agent" Oct 04 05:09:28 crc kubenswrapper[4802]: E1004 05:09:28.582069 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="proxy-httpd" Oct 
04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.582077 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="proxy-httpd" Oct 04 05:09:28 crc kubenswrapper[4802]: E1004 05:09:28.582087 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="sg-core" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.582094 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="sg-core" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.582321 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="ceilometer-notification-agent" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.582339 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="proxy-httpd" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.584814 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="sg-core" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.584838 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" containerName="ceilometer-central-agent" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.592512 4802 scope.go:117] "RemoveContainer" containerID="f374e654858d2f071d180e75cbf5bcf444b0d4fda19fc0c0ce21c04253cff8e5" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.612578 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.613000 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.615536 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.616517 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.616763 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.644863 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.644919 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-config-data\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.644986 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.645009 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-run-httpd\") pod \"ceilometer-0\" (UID: 
\"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.645038 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.645065 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-scripts\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.645146 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f2ht\" (UniqueName: \"kubernetes.io/projected/edd0a556-bfd5-46dc-aa86-63cfc060baf6-kube-api-access-5f2ht\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.645169 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-log-httpd\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.656488 4802 scope.go:117] "RemoveContainer" containerID="78685555954ce23468095d54f93a56cd08d725f086dbf2bb1d465e3bae68b0fe" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.747350 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.747560 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-run-httpd\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.747698 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.747824 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-scripts\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.748039 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f2ht\" (UniqueName: \"kubernetes.io/projected/edd0a556-bfd5-46dc-aa86-63cfc060baf6-kube-api-access-5f2ht\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.748138 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-log-httpd\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 
05:09:28.748277 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.749371 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-config-data\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.749330 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-log-httpd\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.748338 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-run-httpd\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.751755 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.751793 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.752517 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-scripts\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.752855 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.754550 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-config-data\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.767277 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f2ht\" (UniqueName: \"kubernetes.io/projected/edd0a556-bfd5-46dc-aa86-63cfc060baf6-kube-api-access-5f2ht\") pod \"ceilometer-0\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " pod="openstack/ceilometer-0" Oct 04 05:09:28 crc kubenswrapper[4802]: I1004 05:09:28.990076 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.050287 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.057326 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-config-data\") pod \"8c1ff060-6576-403c-a4c2-9f27272f57de\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.057490 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-combined-ca-bundle\") pod \"8c1ff060-6576-403c-a4c2-9f27272f57de\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.057633 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c1ff060-6576-403c-a4c2-9f27272f57de-logs\") pod \"8c1ff060-6576-403c-a4c2-9f27272f57de\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.057735 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nctj\" (UniqueName: \"kubernetes.io/projected/8c1ff060-6576-403c-a4c2-9f27272f57de-kube-api-access-2nctj\") pod \"8c1ff060-6576-403c-a4c2-9f27272f57de\" (UID: \"8c1ff060-6576-403c-a4c2-9f27272f57de\") " Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.058304 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c1ff060-6576-403c-a4c2-9f27272f57de-logs" (OuterVolumeSpecName: "logs") pod "8c1ff060-6576-403c-a4c2-9f27272f57de" (UID: "8c1ff060-6576-403c-a4c2-9f27272f57de"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.068008 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1ff060-6576-403c-a4c2-9f27272f57de-kube-api-access-2nctj" (OuterVolumeSpecName: "kube-api-access-2nctj") pod "8c1ff060-6576-403c-a4c2-9f27272f57de" (UID: "8c1ff060-6576-403c-a4c2-9f27272f57de"). InnerVolumeSpecName "kube-api-access-2nctj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.085318 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-config-data" (OuterVolumeSpecName: "config-data") pod "8c1ff060-6576-403c-a4c2-9f27272f57de" (UID: "8c1ff060-6576-403c-a4c2-9f27272f57de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.099302 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c1ff060-6576-403c-a4c2-9f27272f57de" (UID: "8c1ff060-6576-403c-a4c2-9f27272f57de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.159674 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.159706 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c1ff060-6576-403c-a4c2-9f27272f57de-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.159720 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nctj\" (UniqueName: \"kubernetes.io/projected/8c1ff060-6576-403c-a4c2-9f27272f57de-kube-api-access-2nctj\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.159735 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c1ff060-6576-403c-a4c2-9f27272f57de-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.527401 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.537783 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.539032 4802 generic.go:334] "Generic (PLEG): container finished" podID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerID="6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa" exitCode=0 Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.539102 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c1ff060-6576-403c-a4c2-9f27272f57de","Type":"ContainerDied","Data":"6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa"} Oct 04 05:09:29 crc 
kubenswrapper[4802]: I1004 05:09:29.539127 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c1ff060-6576-403c-a4c2-9f27272f57de","Type":"ContainerDied","Data":"06e2f4099512841f0191e78c4e407870e73d033335900e14e17c28a78fb47116"} Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.539144 4802 scope.go:117] "RemoveContainer" containerID="6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.539256 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.590758 4802 scope.go:117] "RemoveContainer" containerID="ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.595824 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.609555 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.620480 4802 scope.go:117] "RemoveContainer" containerID="6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa" Oct 04 05:09:29 crc kubenswrapper[4802]: E1004 05:09:29.620832 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa\": container with ID starting with 6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa not found: ID does not exist" containerID="6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.620862 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa"} err="failed to get container 
status \"6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa\": rpc error: code = NotFound desc = could not find container \"6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa\": container with ID starting with 6f4b0891eb079ead4f826090f55c619335b5d34b85aa2219b8b8c5faa5bf55aa not found: ID does not exist" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.620882 4802 scope.go:117] "RemoveContainer" containerID="ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906" Oct 04 05:09:29 crc kubenswrapper[4802]: E1004 05:09:29.621144 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906\": container with ID starting with ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906 not found: ID does not exist" containerID="ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.621168 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906"} err="failed to get container status \"ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906\": rpc error: code = NotFound desc = could not find container \"ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906\": container with ID starting with ec66d4445e3100bae26269c807feb66a0af862d1a0f0588f194b765642bb3906 not found: ID does not exist" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.626554 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:29 crc kubenswrapper[4802]: E1004 05:09:29.627784 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-log" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.628019 4802 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-log" Oct 04 05:09:29 crc kubenswrapper[4802]: E1004 05:09:29.628152 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-api" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.628570 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-api" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.629561 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-log" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.629728 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" containerName="nova-api-api" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.630993 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.633791 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.633989 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.634510 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.662103 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.668506 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.668582 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be613e7-d8a4-4e1f-ab7f-296598b3832c-logs\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.668656 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjx6\" (UniqueName: \"kubernetes.io/projected/9be613e7-d8a4-4e1f-ab7f-296598b3832c-kube-api-access-jrjx6\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.668706 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.668768 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-config-data\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.668793 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.770915 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.771021 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-config-data\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.771062 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 
05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.771143 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.771177 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be613e7-d8a4-4e1f-ab7f-296598b3832c-logs\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.771222 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjx6\" (UniqueName: \"kubernetes.io/projected/9be613e7-d8a4-4e1f-ab7f-296598b3832c-kube-api-access-jrjx6\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.772337 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be613e7-d8a4-4e1f-ab7f-296598b3832c-logs\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.780106 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-config-data\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.780335 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.780674 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.780779 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.793514 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjx6\" (UniqueName: \"kubernetes.io/projected/9be613e7-d8a4-4e1f-ab7f-296598b3832c-kube-api-access-jrjx6\") pod \"nova-api-0\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " pod="openstack/nova-api-0" Oct 04 05:09:29 crc kubenswrapper[4802]: I1004 05:09:29.975431 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:30 crc kubenswrapper[4802]: I1004 05:09:30.372030 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f74aeeb-8b92-4e1f-8588-a41ee53f7259" path="/var/lib/kubelet/pods/5f74aeeb-8b92-4e1f-8588-a41ee53f7259/volumes" Oct 04 05:09:30 crc kubenswrapper[4802]: I1004 05:09:30.373152 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1ff060-6576-403c-a4c2-9f27272f57de" path="/var/lib/kubelet/pods/8c1ff060-6576-403c-a4c2-9f27272f57de/volumes" Oct 04 05:09:30 crc kubenswrapper[4802]: I1004 05:09:30.434286 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:30 crc kubenswrapper[4802]: I1004 05:09:30.555244 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerStarted","Data":"af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b"} Oct 04 05:09:30 crc kubenswrapper[4802]: I1004 05:09:30.555623 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerStarted","Data":"4bca9117e9cda43d8f9f60e9393cd67b99ecf8680a1914b56ca0ef1ef4701a35"} Oct 04 05:09:30 crc kubenswrapper[4802]: I1004 05:09:30.556550 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9be613e7-d8a4-4e1f-ab7f-296598b3832c","Type":"ContainerStarted","Data":"3726faafa10c8995b4679ffc70113336b3c9a0c5f9b93760d8c76add545304be"} Oct 04 05:09:30 crc kubenswrapper[4802]: I1004 05:09:30.793870 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:30 crc kubenswrapper[4802]: I1004 05:09:30.816695 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 
05:09:31.571127 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerStarted","Data":"cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac"} Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.572334 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerStarted","Data":"84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f"} Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.576526 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9be613e7-d8a4-4e1f-ab7f-296598b3832c","Type":"ContainerStarted","Data":"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8"} Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.576770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9be613e7-d8a4-4e1f-ab7f-296598b3832c","Type":"ContainerStarted","Data":"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0"} Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.592698 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.605380 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.60536218 podStartE2EDuration="2.60536218s" podCreationTimestamp="2025-10-04 05:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:31.598505784 +0000 UTC m=+1414.006506429" watchObservedRunningTime="2025-10-04 05:09:31.60536218 +0000 UTC m=+1414.013362805" Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.792594 4802 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-5xdjz"] Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.794030 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.796246 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.796257 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.802883 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5xdjz"] Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.916073 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-scripts\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.916142 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-729wm\" (UniqueName: \"kubernetes.io/projected/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-kube-api-access-729wm\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:31 crc kubenswrapper[4802]: I1004 05:09:31.916218 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-config-data\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:31 crc 
kubenswrapper[4802]: I1004 05:09:31.916511 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.017932 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.018282 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-scripts\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.018326 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-729wm\" (UniqueName: \"kubernetes.io/projected/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-kube-api-access-729wm\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.018377 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-config-data\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 
05:09:32.025777 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-scripts\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.025882 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.028044 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-config-data\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.049315 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-729wm\" (UniqueName: \"kubernetes.io/projected/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-kube-api-access-729wm\") pod \"nova-cell1-cell-mapping-5xdjz\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.114981 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:32 crc kubenswrapper[4802]: E1004 05:09:32.427009 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9896243_f600_4461_ac5c_e22070c86c51.slice/crio-54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3\": RecentStats: unable to find data in memory cache]" Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.578971 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5xdjz"] Oct 04 05:09:32 crc kubenswrapper[4802]: I1004 05:09:32.591141 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5xdjz" event={"ID":"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045","Type":"ContainerStarted","Data":"7f0b836ef21d7b7c6773c869819100a1d66927efaceaf7446a9296e4e2523027"} Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.221851 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.282292 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-8gbmv"] Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.282564 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" podUID="6c4b637c-9981-4c46-a657-16e2d39e0b31" containerName="dnsmasq-dns" containerID="cri-o://c6591bf772d2b7ddef20c7d7bf1de4e34de29238c7121d2c23bca920b7683961" gracePeriod=10 Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.604086 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5xdjz" event={"ID":"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045","Type":"ContainerStarted","Data":"ba5d03f70e00c76663d3f1b77d49a4fb2a5bf5543bcf1adcdf8b42b923f86216"} Oct 04 05:09:33 
crc kubenswrapper[4802]: I1004 05:09:33.608515 4802 generic.go:334] "Generic (PLEG): container finished" podID="6c4b637c-9981-4c46-a657-16e2d39e0b31" containerID="c6591bf772d2b7ddef20c7d7bf1de4e34de29238c7121d2c23bca920b7683961" exitCode=0 Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.608585 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" event={"ID":"6c4b637c-9981-4c46-a657-16e2d39e0b31","Type":"ContainerDied","Data":"c6591bf772d2b7ddef20c7d7bf1de4e34de29238c7121d2c23bca920b7683961"} Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.611537 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerStarted","Data":"6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f"} Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.611773 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.624340 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5xdjz" podStartSLOduration=2.624323259 podStartE2EDuration="2.624323259s" podCreationTimestamp="2025-10-04 05:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:33.619283646 +0000 UTC m=+1416.027284271" watchObservedRunningTime="2025-10-04 05:09:33.624323259 +0000 UTC m=+1416.032323884" Oct 04 05:09:33 crc kubenswrapper[4802]: I1004 05:09:33.642471 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.452125249 podStartE2EDuration="5.642453915s" podCreationTimestamp="2025-10-04 05:09:28 +0000 UTC" firstStartedPulling="2025-10-04 05:09:29.537498527 +0000 UTC m=+1411.945499152" lastFinishedPulling="2025-10-04 05:09:32.727827183 
+0000 UTC m=+1415.135827818" observedRunningTime="2025-10-04 05:09:33.639629945 +0000 UTC m=+1416.047630580" watchObservedRunningTime="2025-10-04 05:09:33.642453915 +0000 UTC m=+1416.050454540" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.320967 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.468139 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t86hx\" (UniqueName: \"kubernetes.io/projected/6c4b637c-9981-4c46-a657-16e2d39e0b31-kube-api-access-t86hx\") pod \"6c4b637c-9981-4c46-a657-16e2d39e0b31\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.468427 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-dns-svc\") pod \"6c4b637c-9981-4c46-a657-16e2d39e0b31\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.468719 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-config\") pod \"6c4b637c-9981-4c46-a657-16e2d39e0b31\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.468853 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-sb\") pod \"6c4b637c-9981-4c46-a657-16e2d39e0b31\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.469038 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-nb\") pod \"6c4b637c-9981-4c46-a657-16e2d39e0b31\" (UID: \"6c4b637c-9981-4c46-a657-16e2d39e0b31\") " Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.484883 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4b637c-9981-4c46-a657-16e2d39e0b31-kube-api-access-t86hx" (OuterVolumeSpecName: "kube-api-access-t86hx") pod "6c4b637c-9981-4c46-a657-16e2d39e0b31" (UID: "6c4b637c-9981-4c46-a657-16e2d39e0b31"). InnerVolumeSpecName "kube-api-access-t86hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.522092 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c4b637c-9981-4c46-a657-16e2d39e0b31" (UID: "6c4b637c-9981-4c46-a657-16e2d39e0b31"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.522616 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-config" (OuterVolumeSpecName: "config") pod "6c4b637c-9981-4c46-a657-16e2d39e0b31" (UID: "6c4b637c-9981-4c46-a657-16e2d39e0b31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.526297 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c4b637c-9981-4c46-a657-16e2d39e0b31" (UID: "6c4b637c-9981-4c46-a657-16e2d39e0b31"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.527991 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c4b637c-9981-4c46-a657-16e2d39e0b31" (UID: "6c4b637c-9981-4c46-a657-16e2d39e0b31"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.579031 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t86hx\" (UniqueName: \"kubernetes.io/projected/6c4b637c-9981-4c46-a657-16e2d39e0b31-kube-api-access-t86hx\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.579060 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.579069 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.579078 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.579086 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4b637c-9981-4c46-a657-16e2d39e0b31-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.650112 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.651154 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-8gbmv" event={"ID":"6c4b637c-9981-4c46-a657-16e2d39e0b31","Type":"ContainerDied","Data":"8e0ab1d64cb9820d9de34feb579089c5090a810159b7db745eb07413563a5237"} Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.651229 4802 scope.go:117] "RemoveContainer" containerID="c6591bf772d2b7ddef20c7d7bf1de4e34de29238c7121d2c23bca920b7683961" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.699481 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-8gbmv"] Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.700074 4802 scope.go:117] "RemoveContainer" containerID="4a1b96c08d0ed5e4c77c79cd8c61b5d0bff89d2899aa07dad7304f4579d67997" Oct 04 05:09:34 crc kubenswrapper[4802]: I1004 05:09:34.711501 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-8gbmv"] Oct 04 05:09:36 crc kubenswrapper[4802]: I1004 05:09:36.370234 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4b637c-9981-4c46-a657-16e2d39e0b31" path="/var/lib/kubelet/pods/6c4b637c-9981-4c46-a657-16e2d39e0b31/volumes" Oct 04 05:09:37 crc kubenswrapper[4802]: I1004 05:09:37.681326 4802 generic.go:334] "Generic (PLEG): container finished" podID="fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" containerID="ba5d03f70e00c76663d3f1b77d49a4fb2a5bf5543bcf1adcdf8b42b923f86216" exitCode=0 Oct 04 05:09:37 crc kubenswrapper[4802]: I1004 05:09:37.681680 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5xdjz" event={"ID":"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045","Type":"ContainerDied","Data":"ba5d03f70e00c76663d3f1b77d49a4fb2a5bf5543bcf1adcdf8b42b923f86216"} Oct 04 05:09:38 crc kubenswrapper[4802]: I1004 05:09:38.996868 4802 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.071140 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-scripts\") pod \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.071267 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-config-data\") pod \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.071384 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-729wm\" (UniqueName: \"kubernetes.io/projected/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-kube-api-access-729wm\") pod \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.071413 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-combined-ca-bundle\") pod \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\" (UID: \"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045\") " Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.077004 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-kube-api-access-729wm" (OuterVolumeSpecName: "kube-api-access-729wm") pod "fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" (UID: "fcbc1c5a-044d-4c7a-a7b4-79e0da35c045"). InnerVolumeSpecName "kube-api-access-729wm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.077080 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-scripts" (OuterVolumeSpecName: "scripts") pod "fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" (UID: "fcbc1c5a-044d-4c7a-a7b4-79e0da35c045"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.100809 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" (UID: "fcbc1c5a-044d-4c7a-a7b4-79e0da35c045"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.101226 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-config-data" (OuterVolumeSpecName: "config-data") pod "fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" (UID: "fcbc1c5a-044d-4c7a-a7b4-79e0da35c045"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.173683 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.173718 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.173733 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-729wm\" (UniqueName: \"kubernetes.io/projected/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-kube-api-access-729wm\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.173747 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.718983 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5xdjz" event={"ID":"fcbc1c5a-044d-4c7a-a7b4-79e0da35c045","Type":"ContainerDied","Data":"7f0b836ef21d7b7c6773c869819100a1d66927efaceaf7446a9296e4e2523027"} Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.719326 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0b836ef21d7b7c6773c869819100a1d66927efaceaf7446a9296e4e2523027" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.719023 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5xdjz" Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.879300 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.879529 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6866082e-f58d-4942-b27e-7f13543dbcd3" containerName="nova-scheduler-scheduler" containerID="cri-o://f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc" gracePeriod=30 Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.892570 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.893371 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerName="nova-api-log" containerID="cri-o://973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0" gracePeriod=30 Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.893466 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerName="nova-api-api" containerID="cri-o://b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8" gracePeriod=30 Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.904981 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.905213 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-log" containerID="cri-o://ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93" gracePeriod=30 Oct 04 05:09:39 crc kubenswrapper[4802]: I1004 05:09:39.905408 4802 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-metadata" containerID="cri-o://ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e" gracePeriod=30 Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.512926 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.606753 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-public-tls-certs\") pod \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.606818 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-config-data\") pod \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.607427 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrjx6\" (UniqueName: \"kubernetes.io/projected/9be613e7-d8a4-4e1f-ab7f-296598b3832c-kube-api-access-jrjx6\") pod \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.607536 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-combined-ca-bundle\") pod \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.607569 4802 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be613e7-d8a4-4e1f-ab7f-296598b3832c-logs\") pod \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.607632 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-internal-tls-certs\") pod \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\" (UID: \"9be613e7-d8a4-4e1f-ab7f-296598b3832c\") " Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.607868 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be613e7-d8a4-4e1f-ab7f-296598b3832c-logs" (OuterVolumeSpecName: "logs") pod "9be613e7-d8a4-4e1f-ab7f-296598b3832c" (UID: "9be613e7-d8a4-4e1f-ab7f-296598b3832c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.608136 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be613e7-d8a4-4e1f-ab7f-296598b3832c-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.611351 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be613e7-d8a4-4e1f-ab7f-296598b3832c-kube-api-access-jrjx6" (OuterVolumeSpecName: "kube-api-access-jrjx6") pod "9be613e7-d8a4-4e1f-ab7f-296598b3832c" (UID: "9be613e7-d8a4-4e1f-ab7f-296598b3832c"). InnerVolumeSpecName "kube-api-access-jrjx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.635535 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9be613e7-d8a4-4e1f-ab7f-296598b3832c" (UID: "9be613e7-d8a4-4e1f-ab7f-296598b3832c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.635688 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-config-data" (OuterVolumeSpecName: "config-data") pod "9be613e7-d8a4-4e1f-ab7f-296598b3832c" (UID: "9be613e7-d8a4-4e1f-ab7f-296598b3832c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.658033 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9be613e7-d8a4-4e1f-ab7f-296598b3832c" (UID: "9be613e7-d8a4-4e1f-ab7f-296598b3832c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.679752 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9be613e7-d8a4-4e1f-ab7f-296598b3832c" (UID: "9be613e7-d8a4-4e1f-ab7f-296598b3832c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.710095 4802 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.710133 4802 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.710144 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.710157 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrjx6\" (UniqueName: \"kubernetes.io/projected/9be613e7-d8a4-4e1f-ab7f-296598b3832c-kube-api-access-jrjx6\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.710172 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be613e7-d8a4-4e1f-ab7f-296598b3832c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.728721 4802 generic.go:334] "Generic (PLEG): container finished" podID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerID="b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8" exitCode=0 Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.728755 4802 generic.go:334] "Generic (PLEG): container finished" podID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerID="973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0" exitCode=143 Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.728778 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9be613e7-d8a4-4e1f-ab7f-296598b3832c","Type":"ContainerDied","Data":"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8"} Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.728818 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9be613e7-d8a4-4e1f-ab7f-296598b3832c","Type":"ContainerDied","Data":"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0"} Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.728786 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.728837 4802 scope.go:117] "RemoveContainer" containerID="b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.728827 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9be613e7-d8a4-4e1f-ab7f-296598b3832c","Type":"ContainerDied","Data":"3726faafa10c8995b4679ffc70113336b3c9a0c5f9b93760d8c76add545304be"} Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.730468 4802 generic.go:334] "Generic (PLEG): container finished" podID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerID="ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93" exitCode=143 Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.730499 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0580afa6-ca5b-412b-98cb-734acd556bb8","Type":"ContainerDied","Data":"ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93"} Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.762777 4802 scope.go:117] "RemoveContainer" containerID="973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.769349 4802 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.777681 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.797300 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:40 crc kubenswrapper[4802]: E1004 05:09:40.797727 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerName="nova-api-api" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.797745 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerName="nova-api-api" Oct 04 05:09:40 crc kubenswrapper[4802]: E1004 05:09:40.797791 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerName="nova-api-log" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.797798 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerName="nova-api-log" Oct 04 05:09:40 crc kubenswrapper[4802]: E1004 05:09:40.797817 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4b637c-9981-4c46-a657-16e2d39e0b31" containerName="dnsmasq-dns" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.797822 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4b637c-9981-4c46-a657-16e2d39e0b31" containerName="dnsmasq-dns" Oct 04 05:09:40 crc kubenswrapper[4802]: E1004 05:09:40.797843 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4b637c-9981-4c46-a657-16e2d39e0b31" containerName="init" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.797859 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4b637c-9981-4c46-a657-16e2d39e0b31" containerName="init" Oct 04 05:09:40 crc kubenswrapper[4802]: E1004 05:09:40.797871 4802 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" containerName="nova-manage" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.797877 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" containerName="nova-manage" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.798040 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4b637c-9981-4c46-a657-16e2d39e0b31" containerName="dnsmasq-dns" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.798059 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" containerName="nova-manage" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.798068 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerName="nova-api-api" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.798078 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" containerName="nova-api-log" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.799042 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.802198 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.802476 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.802660 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.810025 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.812354 4802 scope.go:117] "RemoveContainer" containerID="b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8" Oct 04 05:09:40 crc kubenswrapper[4802]: E1004 05:09:40.813154 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8\": container with ID starting with b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8 not found: ID does not exist" containerID="b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.813308 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8"} err="failed to get container status \"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8\": rpc error: code = NotFound desc = could not find container \"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8\": container with ID starting with b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8 not found: ID does not exist" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.813431 4802 
scope.go:117] "RemoveContainer" containerID="973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0" Oct 04 05:09:40 crc kubenswrapper[4802]: E1004 05:09:40.815480 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0\": container with ID starting with 973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0 not found: ID does not exist" containerID="973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.815537 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0"} err="failed to get container status \"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0\": rpc error: code = NotFound desc = could not find container \"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0\": container with ID starting with 973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0 not found: ID does not exist" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.815567 4802 scope.go:117] "RemoveContainer" containerID="b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.816762 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8"} err="failed to get container status \"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8\": rpc error: code = NotFound desc = could not find container \"b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8\": container with ID starting with b2a3fba3f6dea43be379a21f5162278609516630dd24ec67a5d07074169f70f8 not found: ID does not exist" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 
05:09:40.816790 4802 scope.go:117] "RemoveContainer" containerID="973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.817383 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0"} err="failed to get container status \"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0\": rpc error: code = NotFound desc = could not find container \"973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0\": container with ID starting with 973b8a67c57345b44099dd1a694eefc1ebbbd300da2b4990671d27882f1fc6a0 not found: ID does not exist" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.912606 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.912820 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-config-data\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.912899 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwvt\" (UniqueName: \"kubernetes.io/projected/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-kube-api-access-hmwvt\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.913020 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-logs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.913128 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:40 crc kubenswrapper[4802]: I1004 05:09:40.913158 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.015079 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.015169 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-config-data\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.015206 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmwvt\" (UniqueName: \"kubernetes.io/projected/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-kube-api-access-hmwvt\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " 
pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.015240 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-logs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.015285 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.015302 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.017670 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-logs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.019910 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.024080 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.024415 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.024543 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-config-data\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.034780 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmwvt\" (UniqueName: \"kubernetes.io/projected/3ecc5f0f-85cb-4fc7-b243-d81502fd473d-kube-api-access-hmwvt\") pod \"nova-api-0\" (UID: \"3ecc5f0f-85cb-4fc7-b243-d81502fd473d\") " pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.129001 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.551796 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.689527 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.727314 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-config-data\") pod \"6866082e-f58d-4942-b27e-7f13543dbcd3\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.727385 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-combined-ca-bundle\") pod \"6866082e-f58d-4942-b27e-7f13543dbcd3\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.727501 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5qxp\" (UniqueName: \"kubernetes.io/projected/6866082e-f58d-4942-b27e-7f13543dbcd3-kube-api-access-f5qxp\") pod \"6866082e-f58d-4942-b27e-7f13543dbcd3\" (UID: \"6866082e-f58d-4942-b27e-7f13543dbcd3\") " Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.737068 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6866082e-f58d-4942-b27e-7f13543dbcd3-kube-api-access-f5qxp" (OuterVolumeSpecName: "kube-api-access-f5qxp") pod "6866082e-f58d-4942-b27e-7f13543dbcd3" (UID: "6866082e-f58d-4942-b27e-7f13543dbcd3"). InnerVolumeSpecName "kube-api-access-f5qxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.749096 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecc5f0f-85cb-4fc7-b243-d81502fd473d","Type":"ContainerStarted","Data":"5c7b9068464a74233dd1062286ddfaa6cbc0c0dfa8ec29f17f2719b9ad37bb3f"} Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.753048 4802 generic.go:334] "Generic (PLEG): container finished" podID="6866082e-f58d-4942-b27e-7f13543dbcd3" containerID="f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc" exitCode=0 Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.753114 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6866082e-f58d-4942-b27e-7f13543dbcd3","Type":"ContainerDied","Data":"f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc"} Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.753143 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6866082e-f58d-4942-b27e-7f13543dbcd3","Type":"ContainerDied","Data":"d32f6df112ac8edbdba29eb282e47181e5e40b9a4159f5fbfbbb6261a573f006"} Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.753163 4802 scope.go:117] "RemoveContainer" containerID="f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.753287 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.761567 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-config-data" (OuterVolumeSpecName: "config-data") pod "6866082e-f58d-4942-b27e-7f13543dbcd3" (UID: "6866082e-f58d-4942-b27e-7f13543dbcd3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.781018 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6866082e-f58d-4942-b27e-7f13543dbcd3" (UID: "6866082e-f58d-4942-b27e-7f13543dbcd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.790409 4802 scope.go:117] "RemoveContainer" containerID="f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc" Oct 04 05:09:41 crc kubenswrapper[4802]: E1004 05:09:41.791234 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc\": container with ID starting with f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc not found: ID does not exist" containerID="f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.791328 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc"} err="failed to get container status \"f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc\": rpc error: code = NotFound desc = could not find container \"f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc\": container with ID starting with f08ff1d3f35180746849f4d44fbcfc0452d3c49d71114466f7f686da527589fc not found: ID does not exist" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.829719 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-config-data\") on node \"crc\" DevicePath \"\"" 
Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.829756 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6866082e-f58d-4942-b27e-7f13543dbcd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:41 crc kubenswrapper[4802]: I1004 05:09:41.829771 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5qxp\" (UniqueName: \"kubernetes.io/projected/6866082e-f58d-4942-b27e-7f13543dbcd3-kube-api-access-f5qxp\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.084878 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.093310 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.111544 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:42 crc kubenswrapper[4802]: E1004 05:09:42.112044 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6866082e-f58d-4942-b27e-7f13543dbcd3" containerName="nova-scheduler-scheduler" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.112068 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6866082e-f58d-4942-b27e-7f13543dbcd3" containerName="nova-scheduler-scheduler" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.112287 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="6866082e-f58d-4942-b27e-7f13543dbcd3" containerName="nova-scheduler-scheduler" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.113105 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.115430 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.118856 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.136877 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf7828c-edbb-4a9c-aadf-f52ecea9097e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.136933 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf7828c-edbb-4a9c-aadf-f52ecea9097e-config-data\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.137025 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp8s9\" (UniqueName: \"kubernetes.io/projected/baf7828c-edbb-4a9c-aadf-f52ecea9097e-kube-api-access-mp8s9\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.238680 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf7828c-edbb-4a9c-aadf-f52ecea9097e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.238984 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf7828c-edbb-4a9c-aadf-f52ecea9097e-config-data\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.239074 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp8s9\" (UniqueName: \"kubernetes.io/projected/baf7828c-edbb-4a9c-aadf-f52ecea9097e-kube-api-access-mp8s9\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.242707 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf7828c-edbb-4a9c-aadf-f52ecea9097e-config-data\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.247230 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf7828c-edbb-4a9c-aadf-f52ecea9097e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.262231 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp8s9\" (UniqueName: \"kubernetes.io/projected/baf7828c-edbb-4a9c-aadf-f52ecea9097e-kube-api-access-mp8s9\") pod \"nova-scheduler-0\" (UID: \"baf7828c-edbb-4a9c-aadf-f52ecea9097e\") " pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.370455 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6866082e-f58d-4942-b27e-7f13543dbcd3" path="/var/lib/kubelet/pods/6866082e-f58d-4942-b27e-7f13543dbcd3/volumes" 
Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.371111 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be613e7-d8a4-4e1f-ab7f-296598b3832c" path="/var/lib/kubelet/pods/9be613e7-d8a4-4e1f-ab7f-296598b3832c/volumes" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.431988 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 04 05:09:42 crc kubenswrapper[4802]: E1004 05:09:42.663867 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9896243_f600_4461_ac5c_e22070c86c51.slice/crio-54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3\": RecentStats: unable to find data in memory cache]" Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.771714 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecc5f0f-85cb-4fc7-b243-d81502fd473d","Type":"ContainerStarted","Data":"3e5b21032c0ae62813fcbf145f1aae0181a620f46d405bf652fce145a157ed32"} Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.771797 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ecc5f0f-85cb-4fc7-b243-d81502fd473d","Type":"ContainerStarted","Data":"832410c17d8eea946f93f4377d953ef775f53bf1578724322b5f19f4beb20122"} Oct 04 05:09:42 crc kubenswrapper[4802]: I1004 05:09:42.795084 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.795067612 podStartE2EDuration="2.795067612s" podCreationTimestamp="2025-10-04 05:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:42.79395458 +0000 UTC m=+1425.201955205" watchObservedRunningTime="2025-10-04 05:09:42.795067612 +0000 UTC m=+1425.203068237" Oct 04 05:09:42 crc 
kubenswrapper[4802]: I1004 05:09:42.872182 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 04 05:09:42 crc kubenswrapper[4802]: W1004 05:09:42.877575 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaf7828c_edbb_4a9c_aadf_f52ecea9097e.slice/crio-d452d3033b759f1bc98460b2f6cef473d1a65677786616f262a0d8f24476cea7 WatchSource:0}: Error finding container d452d3033b759f1bc98460b2f6cef473d1a65677786616f262a0d8f24476cea7: Status 404 returned error can't find the container with id d452d3033b759f1bc98460b2f6cef473d1a65677786616f262a0d8f24476cea7 Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.048921 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:44164->10.217.0.178:8775: read: connection reset by peer" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.048988 4802 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:44166->10.217.0.178:8775: read: connection reset by peer" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.457201 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.564375 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-combined-ca-bundle\") pod \"0580afa6-ca5b-412b-98cb-734acd556bb8\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.564420 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-config-data\") pod \"0580afa6-ca5b-412b-98cb-734acd556bb8\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.564449 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-nova-metadata-tls-certs\") pod \"0580afa6-ca5b-412b-98cb-734acd556bb8\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.564540 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2g44\" (UniqueName: \"kubernetes.io/projected/0580afa6-ca5b-412b-98cb-734acd556bb8-kube-api-access-n2g44\") pod \"0580afa6-ca5b-412b-98cb-734acd556bb8\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.564658 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0580afa6-ca5b-412b-98cb-734acd556bb8-logs\") pod \"0580afa6-ca5b-412b-98cb-734acd556bb8\" (UID: \"0580afa6-ca5b-412b-98cb-734acd556bb8\") " Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.566020 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0580afa6-ca5b-412b-98cb-734acd556bb8-logs" (OuterVolumeSpecName: "logs") pod "0580afa6-ca5b-412b-98cb-734acd556bb8" (UID: "0580afa6-ca5b-412b-98cb-734acd556bb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.570262 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0580afa6-ca5b-412b-98cb-734acd556bb8-kube-api-access-n2g44" (OuterVolumeSpecName: "kube-api-access-n2g44") pod "0580afa6-ca5b-412b-98cb-734acd556bb8" (UID: "0580afa6-ca5b-412b-98cb-734acd556bb8"). InnerVolumeSpecName "kube-api-access-n2g44". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.600379 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0580afa6-ca5b-412b-98cb-734acd556bb8" (UID: "0580afa6-ca5b-412b-98cb-734acd556bb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.618165 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-config-data" (OuterVolumeSpecName: "config-data") pod "0580afa6-ca5b-412b-98cb-734acd556bb8" (UID: "0580afa6-ca5b-412b-98cb-734acd556bb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.624051 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0580afa6-ca5b-412b-98cb-734acd556bb8" (UID: "0580afa6-ca5b-412b-98cb-734acd556bb8"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.666575 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.666609 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.666618 4802 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0580afa6-ca5b-412b-98cb-734acd556bb8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.666630 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2g44\" (UniqueName: \"kubernetes.io/projected/0580afa6-ca5b-412b-98cb-734acd556bb8-kube-api-access-n2g44\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.666652 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0580afa6-ca5b-412b-98cb-734acd556bb8-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.782034 4802 generic.go:334] "Generic (PLEG): container finished" podID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerID="ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e" exitCode=0 Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.782088 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0580afa6-ca5b-412b-98cb-734acd556bb8","Type":"ContainerDied","Data":"ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e"} 
Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.782125 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.782153 4802 scope.go:117] "RemoveContainer" containerID="ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.782139 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0580afa6-ca5b-412b-98cb-734acd556bb8","Type":"ContainerDied","Data":"1dcf666c9b9a885038e8c5cf6302cd019f404ed4dba23a4f0b1043697d1e55aa"} Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.793796 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baf7828c-edbb-4a9c-aadf-f52ecea9097e","Type":"ContainerStarted","Data":"2813b2b988664c0c21076177d353d6ac250fccd9f2f299b19b20b776497dce21"} Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.793853 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"baf7828c-edbb-4a9c-aadf-f52ecea9097e","Type":"ContainerStarted","Data":"d452d3033b759f1bc98460b2f6cef473d1a65677786616f262a0d8f24476cea7"} Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.804909 4802 scope.go:117] "RemoveContainer" containerID="ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.813803 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.813786744 podStartE2EDuration="1.813786744s" podCreationTimestamp="2025-10-04 05:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:43.811809128 +0000 UTC m=+1426.219809763" watchObservedRunningTime="2025-10-04 05:09:43.813786744 +0000 UTC m=+1426.221787369" 
Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.835705 4802 scope.go:117] "RemoveContainer" containerID="ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e" Oct 04 05:09:43 crc kubenswrapper[4802]: E1004 05:09:43.836197 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e\": container with ID starting with ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e not found: ID does not exist" containerID="ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.836241 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e"} err="failed to get container status \"ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e\": rpc error: code = NotFound desc = could not find container \"ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e\": container with ID starting with ba022423de39b0350bd5873933471215781db1a1e83a712b6612e73d58420b1e not found: ID does not exist" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.836271 4802 scope.go:117] "RemoveContainer" containerID="ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93" Oct 04 05:09:43 crc kubenswrapper[4802]: E1004 05:09:43.836690 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93\": container with ID starting with ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93 not found: ID does not exist" containerID="ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.836743 4802 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93"} err="failed to get container status \"ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93\": rpc error: code = NotFound desc = could not find container \"ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93\": container with ID starting with ea1b1158cfa0c7afb65c540faf97bcf7b66e23e640b71305665f78abbbce3a93 not found: ID does not exist" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.849487 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.867826 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.892870 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:09:43 crc kubenswrapper[4802]: E1004 05:09:43.893310 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-metadata" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.893321 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-metadata" Oct 04 05:09:43 crc kubenswrapper[4802]: E1004 05:09:43.893350 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-log" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.893355 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-log" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.893534 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-log" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 
05:09:43.893553 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" containerName="nova-metadata-metadata" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.894409 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.894492 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.904353 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.906241 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.997099 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.997183 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258jk\" (UniqueName: \"kubernetes.io/projected/05062b6a-0940-429e-abb8-b7108f6a9e9e-kube-api-access-258jk\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.997307 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05062b6a-0940-429e-abb8-b7108f6a9e9e-logs\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 
05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.997349 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:43 crc kubenswrapper[4802]: I1004 05:09:43.997461 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-config-data\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.099243 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05062b6a-0940-429e-abb8-b7108f6a9e9e-logs\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.099523 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.099685 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05062b6a-0940-429e-abb8-b7108f6a9e9e-logs\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.099878 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-config-data\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.100003 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.100091 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258jk\" (UniqueName: \"kubernetes.io/projected/05062b6a-0940-429e-abb8-b7108f6a9e9e-kube-api-access-258jk\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.103287 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.103721 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-config-data\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.103848 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05062b6a-0940-429e-abb8-b7108f6a9e9e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " 
pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.118012 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258jk\" (UniqueName: \"kubernetes.io/projected/05062b6a-0940-429e-abb8-b7108f6a9e9e-kube-api-access-258jk\") pod \"nova-metadata-0\" (UID: \"05062b6a-0940-429e-abb8-b7108f6a9e9e\") " pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.215382 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.378994 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0580afa6-ca5b-412b-98cb-734acd556bb8" path="/var/lib/kubelet/pods/0580afa6-ca5b-412b-98cb-734acd556bb8/volumes" Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.651463 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 04 05:09:44 crc kubenswrapper[4802]: W1004 05:09:44.651478 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05062b6a_0940_429e_abb8_b7108f6a9e9e.slice/crio-5f8c44ff74ebbea4d292c2f5d2fa5049c6a036db12d5ec31af6a1103683da483 WatchSource:0}: Error finding container 5f8c44ff74ebbea4d292c2f5d2fa5049c6a036db12d5ec31af6a1103683da483: Status 404 returned error can't find the container with id 5f8c44ff74ebbea4d292c2f5d2fa5049c6a036db12d5ec31af6a1103683da483 Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.810575 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05062b6a-0940-429e-abb8-b7108f6a9e9e","Type":"ContainerStarted","Data":"0a5064573de63a52c145a592c7c311941b14d855a60eea48a7181deedb7864c7"} Oct 04 05:09:44 crc kubenswrapper[4802]: I1004 05:09:44.812264 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"05062b6a-0940-429e-abb8-b7108f6a9e9e","Type":"ContainerStarted","Data":"5f8c44ff74ebbea4d292c2f5d2fa5049c6a036db12d5ec31af6a1103683da483"} Oct 04 05:09:45 crc kubenswrapper[4802]: I1004 05:09:45.822996 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05062b6a-0940-429e-abb8-b7108f6a9e9e","Type":"ContainerStarted","Data":"38fbf81609a5aee7c36049f1845c3258088a1c80088bd148faf47bbad101b68a"} Oct 04 05:09:45 crc kubenswrapper[4802]: I1004 05:09:45.848504 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.848486813 podStartE2EDuration="2.848486813s" podCreationTimestamp="2025-10-04 05:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:09:45.845773456 +0000 UTC m=+1428.253774081" watchObservedRunningTime="2025-10-04 05:09:45.848486813 +0000 UTC m=+1428.256487438" Oct 04 05:09:47 crc kubenswrapper[4802]: I1004 05:09:47.432858 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 04 05:09:49 crc kubenswrapper[4802]: I1004 05:09:49.216037 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 04 05:09:49 crc kubenswrapper[4802]: I1004 05:09:49.216327 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 04 05:09:51 crc kubenswrapper[4802]: I1004 05:09:51.130226 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 04 05:09:51 crc kubenswrapper[4802]: I1004 05:09:51.130556 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 04 05:09:52 crc kubenswrapper[4802]: I1004 05:09:52.145950 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="3ecc5f0f-85cb-4fc7-b243-d81502fd473d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 04 05:09:52 crc kubenswrapper[4802]: I1004 05:09:52.145979 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ecc5f0f-85cb-4fc7-b243-d81502fd473d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 04 05:09:52 crc kubenswrapper[4802]: I1004 05:09:52.433058 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 04 05:09:52 crc kubenswrapper[4802]: I1004 05:09:52.462398 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 04 05:09:52 crc kubenswrapper[4802]: E1004 05:09:52.896176 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9896243_f600_4461_ac5c_e22070c86c51.slice/crio-54f9088880a2568e7025f04c21ef446a7efacc652ae79fc5ed3c6e03355ce9c3\": RecentStats: unable to find data in memory cache]" Oct 04 05:09:52 crc kubenswrapper[4802]: I1004 05:09:52.909981 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 04 05:09:54 crc kubenswrapper[4802]: I1004 05:09:54.216859 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 04 05:09:54 crc kubenswrapper[4802]: I1004 05:09:54.216948 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 04 05:09:55 crc kubenswrapper[4802]: I1004 05:09:55.235868 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="05062b6a-0940-429e-abb8-b7108f6a9e9e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 04 05:09:55 crc kubenswrapper[4802]: I1004 05:09:55.235895 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="05062b6a-0940-429e-abb8-b7108f6a9e9e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 04 05:09:59 crc kubenswrapper[4802]: I1004 05:09:59.059126 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.137821 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.138160 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.138682 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.138704 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.148113 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.148248 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.786981 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6dg7r"] Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.789423 4802 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.797386 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dg7r"] Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.940483 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-catalog-content\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.940769 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-utilities\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:01 crc kubenswrapper[4802]: I1004 05:10:01.941021 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56gv\" (UniqueName: \"kubernetes.io/projected/8af6a514-15e1-43f4-b41f-7654cc1cbb30-kube-api-access-x56gv\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.042461 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x56gv\" (UniqueName: \"kubernetes.io/projected/8af6a514-15e1-43f4-b41f-7654cc1cbb30-kube-api-access-x56gv\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.042580 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-catalog-content\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.042681 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-utilities\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.043721 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-catalog-content\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.043815 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-utilities\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.078500 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x56gv\" (UniqueName: \"kubernetes.io/projected/8af6a514-15e1-43f4-b41f-7654cc1cbb30-kube-api-access-x56gv\") pod \"redhat-operators-6dg7r\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.124724 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:02 crc kubenswrapper[4802]: W1004 05:10:02.571292 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8af6a514_15e1_43f4_b41f_7654cc1cbb30.slice/crio-7cf88b05fd4e6181d5f3874d2d1a7c16d10c0c4128b2f39670b49a41976ec038 WatchSource:0}: Error finding container 7cf88b05fd4e6181d5f3874d2d1a7c16d10c0c4128b2f39670b49a41976ec038: Status 404 returned error can't find the container with id 7cf88b05fd4e6181d5f3874d2d1a7c16d10c0c4128b2f39670b49a41976ec038 Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.579236 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dg7r"] Oct 04 05:10:02 crc kubenswrapper[4802]: I1004 05:10:02.981072 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dg7r" event={"ID":"8af6a514-15e1-43f4-b41f-7654cc1cbb30","Type":"ContainerStarted","Data":"7cf88b05fd4e6181d5f3874d2d1a7c16d10c0c4128b2f39670b49a41976ec038"} Oct 04 05:10:03 crc kubenswrapper[4802]: I1004 05:10:03.991160 4802 generic.go:334] "Generic (PLEG): container finished" podID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerID="b8daffab2d563b60689de8964231874eb06989a5134c409ff3347ddf5d10792c" exitCode=0 Oct 04 05:10:03 crc kubenswrapper[4802]: I1004 05:10:03.991308 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dg7r" event={"ID":"8af6a514-15e1-43f4-b41f-7654cc1cbb30","Type":"ContainerDied","Data":"b8daffab2d563b60689de8964231874eb06989a5134c409ff3347ddf5d10792c"} Oct 04 05:10:04 crc kubenswrapper[4802]: I1004 05:10:04.220882 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 04 05:10:04 crc kubenswrapper[4802]: I1004 05:10:04.221513 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Oct 04 05:10:04 crc kubenswrapper[4802]: I1004 05:10:04.224651 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 04 05:10:05 crc kubenswrapper[4802]: I1004 05:10:05.006510 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 04 05:10:06 crc kubenswrapper[4802]: I1004 05:10:06.009986 4802 generic.go:334] "Generic (PLEG): container finished" podID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerID="bc41617b30b2f2a6a4a1ab52a7f8a444aba78cb2304a7abd8e8e87e75d259eb6" exitCode=0 Oct 04 05:10:06 crc kubenswrapper[4802]: I1004 05:10:06.010089 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dg7r" event={"ID":"8af6a514-15e1-43f4-b41f-7654cc1cbb30","Type":"ContainerDied","Data":"bc41617b30b2f2a6a4a1ab52a7f8a444aba78cb2304a7abd8e8e87e75d259eb6"} Oct 04 05:10:08 crc kubenswrapper[4802]: I1004 05:10:08.028147 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dg7r" event={"ID":"8af6a514-15e1-43f4-b41f-7654cc1cbb30","Type":"ContainerStarted","Data":"e9234b24fdf0362653dd7ed0c6540f180d90af2fe7f40a04a38dd11155ff563f"} Oct 04 05:10:08 crc kubenswrapper[4802]: I1004 05:10:08.052510 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6dg7r" podStartSLOduration=4.192100366 podStartE2EDuration="7.052495726s" podCreationTimestamp="2025-10-04 05:10:01 +0000 UTC" firstStartedPulling="2025-10-04 05:10:03.993068713 +0000 UTC m=+1446.401069338" lastFinishedPulling="2025-10-04 05:10:06.853464063 +0000 UTC m=+1449.261464698" observedRunningTime="2025-10-04 05:10:08.049333566 +0000 UTC m=+1450.457334191" watchObservedRunningTime="2025-10-04 05:10:08.052495726 +0000 UTC m=+1450.460496351" Oct 04 05:10:12 crc kubenswrapper[4802]: I1004 05:10:12.125864 4802 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:12 crc kubenswrapper[4802]: I1004 05:10:12.126407 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:12 crc kubenswrapper[4802]: I1004 05:10:12.187109 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:13 crc kubenswrapper[4802]: I1004 05:10:13.115782 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:13 crc kubenswrapper[4802]: I1004 05:10:13.160916 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dg7r"] Oct 04 05:10:14 crc kubenswrapper[4802]: I1004 05:10:14.133596 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:10:15 crc kubenswrapper[4802]: I1004 05:10:15.051699 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:10:15 crc kubenswrapper[4802]: I1004 05:10:15.088365 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6dg7r" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerName="registry-server" containerID="cri-o://e9234b24fdf0362653dd7ed0c6540f180d90af2fe7f40a04a38dd11155ff563f" gracePeriod=2 Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.112369 4802 generic.go:334] "Generic (PLEG): container finished" podID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerID="e9234b24fdf0362653dd7ed0c6540f180d90af2fe7f40a04a38dd11155ff563f" exitCode=0 Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.112482 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dg7r" 
event={"ID":"8af6a514-15e1-43f4-b41f-7654cc1cbb30","Type":"ContainerDied","Data":"e9234b24fdf0362653dd7ed0c6540f180d90af2fe7f40a04a38dd11155ff563f"} Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.423062 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.515428 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-catalog-content\") pod \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.515614 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-utilities\") pod \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.515777 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x56gv\" (UniqueName: \"kubernetes.io/projected/8af6a514-15e1-43f4-b41f-7654cc1cbb30-kube-api-access-x56gv\") pod \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\" (UID: \"8af6a514-15e1-43f4-b41f-7654cc1cbb30\") " Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.521091 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-utilities" (OuterVolumeSpecName: "utilities") pod "8af6a514-15e1-43f4-b41f-7654cc1cbb30" (UID: "8af6a514-15e1-43f4-b41f-7654cc1cbb30"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.572998 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af6a514-15e1-43f4-b41f-7654cc1cbb30-kube-api-access-x56gv" (OuterVolumeSpecName: "kube-api-access-x56gv") pod "8af6a514-15e1-43f4-b41f-7654cc1cbb30" (UID: "8af6a514-15e1-43f4-b41f-7654cc1cbb30"). InnerVolumeSpecName "kube-api-access-x56gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.620740 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x56gv\" (UniqueName: \"kubernetes.io/projected/8af6a514-15e1-43f4-b41f-7654cc1cbb30-kube-api-access-x56gv\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.620781 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.662782 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8af6a514-15e1-43f4-b41f-7654cc1cbb30" (UID: "8af6a514-15e1-43f4-b41f-7654cc1cbb30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:10:17 crc kubenswrapper[4802]: I1004 05:10:17.723191 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af6a514-15e1-43f4-b41f-7654cc1cbb30-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:18 crc kubenswrapper[4802]: I1004 05:10:18.131408 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dg7r" event={"ID":"8af6a514-15e1-43f4-b41f-7654cc1cbb30","Type":"ContainerDied","Data":"7cf88b05fd4e6181d5f3874d2d1a7c16d10c0c4128b2f39670b49a41976ec038"} Oct 04 05:10:18 crc kubenswrapper[4802]: I1004 05:10:18.131472 4802 scope.go:117] "RemoveContainer" containerID="e9234b24fdf0362653dd7ed0c6540f180d90af2fe7f40a04a38dd11155ff563f" Oct 04 05:10:18 crc kubenswrapper[4802]: I1004 05:10:18.131489 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dg7r" Oct 04 05:10:18 crc kubenswrapper[4802]: I1004 05:10:18.164767 4802 scope.go:117] "RemoveContainer" containerID="bc41617b30b2f2a6a4a1ab52a7f8a444aba78cb2304a7abd8e8e87e75d259eb6" Oct 04 05:10:18 crc kubenswrapper[4802]: I1004 05:10:18.165046 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dg7r"] Oct 04 05:10:18 crc kubenswrapper[4802]: I1004 05:10:18.174720 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6dg7r"] Oct 04 05:10:18 crc kubenswrapper[4802]: I1004 05:10:18.186598 4802 scope.go:117] "RemoveContainer" containerID="b8daffab2d563b60689de8964231874eb06989a5134c409ff3347ddf5d10792c" Oct 04 05:10:18 crc kubenswrapper[4802]: I1004 05:10:18.372786 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" path="/var/lib/kubelet/pods/8af6a514-15e1-43f4-b41f-7654cc1cbb30/volumes" Oct 04 05:10:18 crc 
kubenswrapper[4802]: I1004 05:10:18.687025 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" containerName="rabbitmq" containerID="cri-o://60456b7a1a9871496824f0469c232d699d368b16662160fb332db0b69b7018af" gracePeriod=604796 Oct 04 05:10:19 crc kubenswrapper[4802]: I1004 05:10:19.894816 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" containerName="rabbitmq" containerID="cri-o://474eac2cd2061373c309920f72599c82b718bb290c74a858eeac1043eefc3001" gracePeriod=604796 Oct 04 05:10:22 crc kubenswrapper[4802]: I1004 05:10:22.662694 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:10:22 crc kubenswrapper[4802]: I1004 05:10:22.663047 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.210913 4802 generic.go:334] "Generic (PLEG): container finished" podID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" containerID="60456b7a1a9871496824f0469c232d699d368b16662160fb332db0b69b7018af" exitCode=0 Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.211000 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf0ca60a-0bbc-41eb-bb00-c32d500506b1","Type":"ContainerDied","Data":"60456b7a1a9871496824f0469c232d699d368b16662160fb332db0b69b7018af"} Oct 04 
05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.211629 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf0ca60a-0bbc-41eb-bb00-c32d500506b1","Type":"ContainerDied","Data":"e209d3a0a0808c3bfa48d59569fa2cc3ac40744b142dfae696232d46acf93b11"} Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.211679 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e209d3a0a0808c3bfa48d59569fa2cc3ac40744b142dfae696232d46acf93b11" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.243015 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358018 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfvsd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-kube-api-access-dfvsd\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358103 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-tls\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358123 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-erlang-cookie-secret\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358174 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-pod-info\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358199 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-plugins-conf\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358251 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358337 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-confd\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358359 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-server-conf\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358425 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-config-data\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358496 4802 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-plugins\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.358542 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-erlang-cookie\") pod \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\" (UID: \"cf0ca60a-0bbc-41eb-bb00-c32d500506b1\") " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.359352 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.366259 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.367148 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.367976 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.368831 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.368997 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.369039 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-kube-api-access-dfvsd" (OuterVolumeSpecName: "kube-api-access-dfvsd") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "kube-api-access-dfvsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.369811 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-pod-info" (OuterVolumeSpecName: "pod-info") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.390075 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-config-data" (OuterVolumeSpecName: "config-data") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.417952 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-server-conf" (OuterVolumeSpecName: "server-conf") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460455 4802 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460489 4802 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460504 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfvsd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-kube-api-access-dfvsd\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460515 4802 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460525 4802 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460537 4802 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-pod-info\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460546 4802 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: 
I1004 05:10:25.460566 4802 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460576 4802 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-server-conf\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.460586 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.474048 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cf0ca60a-0bbc-41eb-bb00-c32d500506b1" (UID: "cf0ca60a-0bbc-41eb-bb00-c32d500506b1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.488615 4802 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.563945 4802 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:25 crc kubenswrapper[4802]: I1004 05:10:25.563989 4802 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf0ca60a-0bbc-41eb-bb00-c32d500506b1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.228467 4802 generic.go:334] "Generic (PLEG): container finished" podID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" containerID="474eac2cd2061373c309920f72599c82b718bb290c74a858eeac1043eefc3001" exitCode=0 Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.228805 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.228573 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea","Type":"ContainerDied","Data":"474eac2cd2061373c309920f72599c82b718bb290c74a858eeac1043eefc3001"} Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.293826 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.337734 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.353146 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:10:26 crc kubenswrapper[4802]: E1004 05:10:26.353611 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" containerName="rabbitmq" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.353653 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" containerName="rabbitmq" Oct 04 05:10:26 crc kubenswrapper[4802]: E1004 05:10:26.353669 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerName="extract-content" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.353678 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerName="extract-content" Oct 04 05:10:26 crc kubenswrapper[4802]: E1004 05:10:26.353713 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerName="registry-server" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.353720 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerName="registry-server" 
Oct 04 05:10:26 crc kubenswrapper[4802]: E1004 05:10:26.353738 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerName="extract-utilities" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.353746 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerName="extract-utilities" Oct 04 05:10:26 crc kubenswrapper[4802]: E1004 05:10:26.353762 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" containerName="setup-container" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.353769 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" containerName="setup-container" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.353982 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af6a514-15e1-43f4-b41f-7654cc1cbb30" containerName="registry-server" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.354019 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" containerName="rabbitmq" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.355299 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.357153 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.357607 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.357767 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.358059 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.358335 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.358593 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5zn92" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.358631 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.376663 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0ca60a-0bbc-41eb-bb00-c32d500506b1" path="/var/lib/kubelet/pods/cf0ca60a-0bbc-41eb-bb00-c32d500506b1/volumes" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.377407 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.484531 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " 
pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.484600 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.484698 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad74ddca-2d42-4c28-8147-6088b9876fa1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.484734 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.484766 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.484876 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 
crc kubenswrapper[4802]: I1004 05:10:26.484928 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmnhs\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-kube-api-access-gmnhs\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.484955 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.484979 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.485238 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.485270 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad74ddca-2d42-4c28-8147-6088b9876fa1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.487446 4802 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.586941 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-pod-info\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.586996 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-confd\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587030 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-plugins-conf\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587049 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587086 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-erlang-cookie-secret\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587144 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-server-conf\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587193 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-erlang-cookie\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587216 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stk55\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-kube-api-access-stk55\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587231 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-config-data\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587260 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-tls\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587287 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-plugins\") pod \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\" (UID: \"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea\") " Oct 04 05:10:26 crc 
kubenswrapper[4802]: I1004 05:10:26.587536 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad74ddca-2d42-4c28-8147-6088b9876fa1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587568 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587591 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587609 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587650 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmnhs\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-kube-api-access-gmnhs\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587674 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587690 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587739 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587753 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad74ddca-2d42-4c28-8147-6088b9876fa1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587788 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.587813 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " 
pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.588326 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.588927 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.589442 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.590244 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.590440 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.592447 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad74ddca-2d42-4c28-8147-6088b9876fa1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.593240 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad74ddca-2d42-4c28-8147-6088b9876fa1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.593605 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.593943 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.594336 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.594467 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.594629 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.594778 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.598153 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad74ddca-2d42-4c28-8147-6088b9876fa1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.598301 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.598370 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-kube-api-access-stk55" (OuterVolumeSpecName: "kube-api-access-stk55") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "kube-api-access-stk55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.598609 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.600981 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-pod-info" (OuterVolumeSpecName: "pod-info") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.616039 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmnhs\" (UniqueName: \"kubernetes.io/projected/ad74ddca-2d42-4c28-8147-6088b9876fa1-kube-api-access-gmnhs\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.626952 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-config-data" (OuterVolumeSpecName: "config-data") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.653761 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"ad74ddca-2d42-4c28-8147-6088b9876fa1\") " pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.680253 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.685310 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-server-conf" (OuterVolumeSpecName: "server-conf") pod "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690121 4802 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-server-conf\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690152 4802 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690162 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690170 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stk55\" (UniqueName: 
\"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-kube-api-access-stk55\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690179 4802 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690186 4802 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690194 4802 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-pod-info\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690202 4802 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690224 4802 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.690233 4802 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.714911 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod 
"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" (UID: "78c4949d-d61b-4d3e-aa27-7c8bc4da81ea"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.716211 4802 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.792091 4802 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:26 crc kubenswrapper[4802]: I1004 05:10:26.792121 4802 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.106846 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.238583 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad74ddca-2d42-4c28-8147-6088b9876fa1","Type":"ContainerStarted","Data":"510fcdf282992dc8b0b6bba2ba92e377e478f113059093de29916d463df8e055"} Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.240736 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"78c4949d-d61b-4d3e-aa27-7c8bc4da81ea","Type":"ContainerDied","Data":"f892fff3613f332bdab7b0332697a714c8b9cf8a335f9aaf7eff4984144c8d05"} Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.240768 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.240787 4802 scope.go:117] "RemoveContainer" containerID="474eac2cd2061373c309920f72599c82b718bb290c74a858eeac1043eefc3001" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.272458 4802 scope.go:117] "RemoveContainer" containerID="a6014bcef3cbb5ec832574558747ef4a77d2d25f64247c10c2a30cebfaee1b8e" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.296110 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.302433 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.331905 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:10:27 crc kubenswrapper[4802]: E1004 05:10:27.332308 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" containerName="rabbitmq" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.332327 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" containerName="rabbitmq" Oct 04 05:10:27 crc kubenswrapper[4802]: E1004 05:10:27.332356 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" containerName="setup-container" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.332364 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" containerName="setup-container" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.332616 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" containerName="rabbitmq" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.333846 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.339802 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.340276 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.340541 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.340735 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.341026 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l9459" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.341439 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.341691 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.347388 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.406938 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8zvg\" (UniqueName: \"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-kube-api-access-t8zvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407026 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407083 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407103 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e886bcf5-c8ec-465d-87cc-22b905bec5da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407152 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407243 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407265 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407337 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407395 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407427 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e886bcf5-c8ec-465d-87cc-22b905bec5da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.407484 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.508578 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t8zvg\" (UniqueName: \"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-kube-api-access-t8zvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.508922 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.508953 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.508967 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e886bcf5-c8ec-465d-87cc-22b905bec5da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.508995 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.509059 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.509089 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.509136 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.509188 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.509225 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e886bcf5-c8ec-465d-87cc-22b905bec5da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.509253 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.509932 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.510058 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.510697 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.510970 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.511048 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc 
kubenswrapper[4802]: I1004 05:10:27.511247 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e886bcf5-c8ec-465d-87cc-22b905bec5da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.514087 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e886bcf5-c8ec-465d-87cc-22b905bec5da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.514758 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e886bcf5-c8ec-465d-87cc-22b905bec5da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.515081 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.515870 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.536576 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8zvg\" (UniqueName: 
\"kubernetes.io/projected/e886bcf5-c8ec-465d-87cc-22b905bec5da-kube-api-access-t8zvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.537913 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e886bcf5-c8ec-465d-87cc-22b905bec5da\") " pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:27 crc kubenswrapper[4802]: I1004 05:10:27.664199 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:10:28 crc kubenswrapper[4802]: I1004 05:10:28.105130 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 04 05:10:28 crc kubenswrapper[4802]: I1004 05:10:28.257233 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e886bcf5-c8ec-465d-87cc-22b905bec5da","Type":"ContainerStarted","Data":"b4984c6c47c14cc5a271fbb1a954a03744821a9492125a550b979a6cee2fbaf2"} Oct 04 05:10:28 crc kubenswrapper[4802]: I1004 05:10:28.383419 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c4949d-d61b-4d3e-aa27-7c8bc4da81ea" path="/var/lib/kubelet/pods/78c4949d-d61b-4d3e-aa27-7c8bc4da81ea/volumes" Oct 04 05:10:29 crc kubenswrapper[4802]: I1004 05:10:29.268174 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad74ddca-2d42-4c28-8147-6088b9876fa1","Type":"ContainerStarted","Data":"86bad09673d41f9e58d0399c1830b182b683a7461d426def0d1649b5876b07a9"} Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.187572 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-z4k2t"] Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.192416 4802 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.196983 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.206830 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-z4k2t"] Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.255047 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-config\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.255136 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.255199 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.255314 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: 
\"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.255532 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4c4c\" (UniqueName: \"kubernetes.io/projected/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-kube-api-access-s4c4c\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.255600 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.277661 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e886bcf5-c8ec-465d-87cc-22b905bec5da","Type":"ContainerStarted","Data":"a501539c438a288c43faf127487891f41bc65f932de4f77ec58c8f2b59d4a411"} Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.357798 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4c4c\" (UniqueName: \"kubernetes.io/projected/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-kube-api-access-s4c4c\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.357872 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 
05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.358035 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-config\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.358197 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.358258 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.358323 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.359332 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.359426 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-config\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.359488 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.359551 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.359603 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.381103 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4c4c\" (UniqueName: \"kubernetes.io/projected/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-kube-api-access-s4c4c\") pod \"dnsmasq-dns-6447ccbd8f-z4k2t\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.519094 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:30 crc kubenswrapper[4802]: I1004 05:10:30.980487 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-z4k2t"] Oct 04 05:10:31 crc kubenswrapper[4802]: I1004 05:10:31.287816 4802 generic.go:334] "Generic (PLEG): container finished" podID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" containerID="1e91b9d04841570106021fb9ca420af98db3f01dbd07b308a8570f56cf46b186" exitCode=0 Oct 04 05:10:31 crc kubenswrapper[4802]: I1004 05:10:31.287938 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" event={"ID":"f869c1f0-23d5-4f51-a7d3-f33a2f63b050","Type":"ContainerDied","Data":"1e91b9d04841570106021fb9ca420af98db3f01dbd07b308a8570f56cf46b186"} Oct 04 05:10:31 crc kubenswrapper[4802]: I1004 05:10:31.288173 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" event={"ID":"f869c1f0-23d5-4f51-a7d3-f33a2f63b050","Type":"ContainerStarted","Data":"eff9678725248f1865a0ef680d7786aa151af90018d55c877247f70a5aa05906"} Oct 04 05:10:32 crc kubenswrapper[4802]: I1004 05:10:32.299413 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" event={"ID":"f869c1f0-23d5-4f51-a7d3-f33a2f63b050","Type":"ContainerStarted","Data":"4c0834857b7e44d7d12bf5cc7ab60b95d397860073a0f9649bac7a2593330349"} Oct 04 05:10:32 crc kubenswrapper[4802]: I1004 05:10:32.299842 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:32 crc kubenswrapper[4802]: I1004 05:10:32.329178 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" podStartSLOduration=2.329161256 podStartE2EDuration="2.329161256s" podCreationTimestamp="2025-10-04 05:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:10:32.321071196 +0000 UTC m=+1474.729071831" watchObservedRunningTime="2025-10-04 05:10:32.329161256 +0000 UTC m=+1474.737161881" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.520838 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.580436 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-4wll7"] Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.580700 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" podUID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" containerName="dnsmasq-dns" containerID="cri-o://a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd" gracePeriod=10 Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.740303 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-z5ff7"] Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.742881 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.755610 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-z5ff7"] Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.858237 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.858548 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.858673 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.858818 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.858857 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-config\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.859017 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4x66\" (UniqueName: \"kubernetes.io/projected/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-kube-api-access-j4x66\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.963188 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.963249 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.963299 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.963325 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-config\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.963407 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4x66\" (UniqueName: \"kubernetes.io/projected/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-kube-api-access-j4x66\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.963485 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.964481 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.964623 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-config\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.964623 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.964773 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:40 crc kubenswrapper[4802]: I1004 05:10:40.964783 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.008758 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4x66\" (UniqueName: \"kubernetes.io/projected/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-kube-api-access-j4x66\") pod \"dnsmasq-dns-864d5fc68c-z5ff7\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.072188 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.097964 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.268196 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-nb\") pod \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.269137 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcbgt\" (UniqueName: \"kubernetes.io/projected/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-kube-api-access-qcbgt\") pod \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.269199 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-dns-svc\") pod \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.269246 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-config\") pod \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.269287 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-sb\") pod \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\" (UID: \"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77\") " Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.273444 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-kube-api-access-qcbgt" (OuterVolumeSpecName: "kube-api-access-qcbgt") pod "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" (UID: "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77"). InnerVolumeSpecName "kube-api-access-qcbgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.318074 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-config" (OuterVolumeSpecName: "config") pod "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" (UID: "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.326569 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" (UID: "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.331267 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" (UID: "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.332475 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" (UID: "a0944ab8-e05e-4d57-ac11-1d81b8cbfd77"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.372022 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.372078 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.372149 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.372182 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.372205 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcbgt\" (UniqueName: \"kubernetes.io/projected/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77-kube-api-access-qcbgt\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.385562 4802 generic.go:334] "Generic (PLEG): container finished" podID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" containerID="a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd" exitCode=0 Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.385633 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.385804 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" event={"ID":"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77","Type":"ContainerDied","Data":"a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd"} Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.385841 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-4wll7" event={"ID":"a0944ab8-e05e-4d57-ac11-1d81b8cbfd77","Type":"ContainerDied","Data":"efe6bb2575384141404a7eb973cbdf0ee0632a181e6cd877dc8011b9dcb87de3"} Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.385860 4802 scope.go:117] "RemoveContainer" containerID="a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.410857 4802 scope.go:117] "RemoveContainer" containerID="94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.427317 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-4wll7"] Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.435289 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-4wll7"] Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.436791 4802 scope.go:117] "RemoveContainer" containerID="a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd" Oct 04 05:10:41 crc kubenswrapper[4802]: E1004 05:10:41.437205 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd\": container with ID starting with a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd not found: ID does not exist" 
containerID="a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.437240 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd"} err="failed to get container status \"a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd\": rpc error: code = NotFound desc = could not find container \"a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd\": container with ID starting with a1b342f50f2faa4dc256360676e4e542dd72d247b39df915b94b16cb3ececbfd not found: ID does not exist" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.437261 4802 scope.go:117] "RemoveContainer" containerID="94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e" Oct 04 05:10:41 crc kubenswrapper[4802]: E1004 05:10:41.437473 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e\": container with ID starting with 94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e not found: ID does not exist" containerID="94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.437500 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e"} err="failed to get container status \"94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e\": rpc error: code = NotFound desc = could not find container \"94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e\": container with ID starting with 94611c693d92f43d69700800e03736ebb95443495561e6278ab6890dcef1ba7e not found: ID does not exist" Oct 04 05:10:41 crc kubenswrapper[4802]: I1004 05:10:41.595672 4802 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-z5ff7"] Oct 04 05:10:42 crc kubenswrapper[4802]: I1004 05:10:42.371876 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" path="/var/lib/kubelet/pods/a0944ab8-e05e-4d57-ac11-1d81b8cbfd77/volumes" Oct 04 05:10:42 crc kubenswrapper[4802]: I1004 05:10:42.398309 4802 generic.go:334] "Generic (PLEG): container finished" podID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" containerID="c70256d2de4599839b9bc55703e41b6def2adcd267d6efbe08e9d0281e87cf8b" exitCode=0 Oct 04 05:10:42 crc kubenswrapper[4802]: I1004 05:10:42.398362 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" event={"ID":"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1","Type":"ContainerDied","Data":"c70256d2de4599839b9bc55703e41b6def2adcd267d6efbe08e9d0281e87cf8b"} Oct 04 05:10:42 crc kubenswrapper[4802]: I1004 05:10:42.398386 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" event={"ID":"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1","Type":"ContainerStarted","Data":"acd7b6637e718d466f8aef662630999ca52c9893cd997275992e803bbc8145b5"} Oct 04 05:10:43 crc kubenswrapper[4802]: I1004 05:10:43.413280 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" event={"ID":"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1","Type":"ContainerStarted","Data":"b0b6ca717b84602d11fcbbe95b20f4d326049c6ad49450d83963d5e2c55dfb8c"} Oct 04 05:10:43 crc kubenswrapper[4802]: I1004 05:10:43.413742 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:43 crc kubenswrapper[4802]: I1004 05:10:43.444254 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" podStartSLOduration=3.444231383 podStartE2EDuration="3.444231383s" podCreationTimestamp="2025-10-04 05:10:40 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:10:43.435871855 +0000 UTC m=+1485.843872480" watchObservedRunningTime="2025-10-04 05:10:43.444231383 +0000 UTC m=+1485.852232008" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.073829 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.142755 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-z4k2t"] Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.142968 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" podUID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" containerName="dnsmasq-dns" containerID="cri-o://4c0834857b7e44d7d12bf5cc7ab60b95d397860073a0f9649bac7a2593330349" gracePeriod=10 Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.480702 4802 generic.go:334] "Generic (PLEG): container finished" podID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" containerID="4c0834857b7e44d7d12bf5cc7ab60b95d397860073a0f9649bac7a2593330349" exitCode=0 Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.481061 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" event={"ID":"f869c1f0-23d5-4f51-a7d3-f33a2f63b050","Type":"ContainerDied","Data":"4c0834857b7e44d7d12bf5cc7ab60b95d397860073a0f9649bac7a2593330349"} Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.586379 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.766397 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-nb\") pod \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.766455 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-openstack-edpm-ipam\") pod \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.766496 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4c4c\" (UniqueName: \"kubernetes.io/projected/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-kube-api-access-s4c4c\") pod \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.766593 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-sb\") pod \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.766670 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-dns-svc\") pod \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.766699 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-config\") pod \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\" (UID: \"f869c1f0-23d5-4f51-a7d3-f33a2f63b050\") " Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.772629 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-kube-api-access-s4c4c" (OuterVolumeSpecName: "kube-api-access-s4c4c") pod "f869c1f0-23d5-4f51-a7d3-f33a2f63b050" (UID: "f869c1f0-23d5-4f51-a7d3-f33a2f63b050"). InnerVolumeSpecName "kube-api-access-s4c4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.820897 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-config" (OuterVolumeSpecName: "config") pod "f869c1f0-23d5-4f51-a7d3-f33a2f63b050" (UID: "f869c1f0-23d5-4f51-a7d3-f33a2f63b050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.831383 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f869c1f0-23d5-4f51-a7d3-f33a2f63b050" (UID: "f869c1f0-23d5-4f51-a7d3-f33a2f63b050"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.832749 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f869c1f0-23d5-4f51-a7d3-f33a2f63b050" (UID: "f869c1f0-23d5-4f51-a7d3-f33a2f63b050"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.837277 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f869c1f0-23d5-4f51-a7d3-f33a2f63b050" (UID: "f869c1f0-23d5-4f51-a7d3-f33a2f63b050"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.840997 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f869c1f0-23d5-4f51-a7d3-f33a2f63b050" (UID: "f869c1f0-23d5-4f51-a7d3-f33a2f63b050"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.869223 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.869280 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.869293 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.869305 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" 
Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.869318 4802 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:51 crc kubenswrapper[4802]: I1004 05:10:51.869328 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4c4c\" (UniqueName: \"kubernetes.io/projected/f869c1f0-23d5-4f51-a7d3-f33a2f63b050-kube-api-access-s4c4c\") on node \"crc\" DevicePath \"\"" Oct 04 05:10:52 crc kubenswrapper[4802]: I1004 05:10:52.493502 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" event={"ID":"f869c1f0-23d5-4f51-a7d3-f33a2f63b050","Type":"ContainerDied","Data":"eff9678725248f1865a0ef680d7786aa151af90018d55c877247f70a5aa05906"} Oct 04 05:10:52 crc kubenswrapper[4802]: I1004 05:10:52.493568 4802 scope.go:117] "RemoveContainer" containerID="4c0834857b7e44d7d12bf5cc7ab60b95d397860073a0f9649bac7a2593330349" Oct 04 05:10:52 crc kubenswrapper[4802]: I1004 05:10:52.493588 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-z4k2t" Oct 04 05:10:52 crc kubenswrapper[4802]: I1004 05:10:52.518219 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-z4k2t"] Oct 04 05:10:52 crc kubenswrapper[4802]: I1004 05:10:52.528515 4802 scope.go:117] "RemoveContainer" containerID="1e91b9d04841570106021fb9ca420af98db3f01dbd07b308a8570f56cf46b186" Oct 04 05:10:52 crc kubenswrapper[4802]: I1004 05:10:52.534615 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-z4k2t"] Oct 04 05:10:52 crc kubenswrapper[4802]: I1004 05:10:52.662517 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:10:52 crc kubenswrapper[4802]: I1004 05:10:52.663217 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:10:54 crc kubenswrapper[4802]: I1004 05:10:54.369387 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" path="/var/lib/kubelet/pods/f869c1f0-23d5-4f51-a7d3-f33a2f63b050/volumes" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.360038 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl"] Oct 04 05:11:01 crc kubenswrapper[4802]: E1004 05:11:01.360898 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" containerName="init" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 
05:11:01.360911 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" containerName="init" Oct 04 05:11:01 crc kubenswrapper[4802]: E1004 05:11:01.360925 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" containerName="dnsmasq-dns" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.360930 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" containerName="dnsmasq-dns" Oct 04 05:11:01 crc kubenswrapper[4802]: E1004 05:11:01.360941 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" containerName="init" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.360947 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" containerName="init" Oct 04 05:11:01 crc kubenswrapper[4802]: E1004 05:11:01.360972 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" containerName="dnsmasq-dns" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.360977 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" containerName="dnsmasq-dns" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.361153 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0944ab8-e05e-4d57-ac11-1d81b8cbfd77" containerName="dnsmasq-dns" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.361168 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f869c1f0-23d5-4f51-a7d3-f33a2f63b050" containerName="dnsmasq-dns" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.361803 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.368254 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.368502 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.369123 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.369279 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.378812 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl"] Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.546868 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.546981 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnpnv\" (UniqueName: \"kubernetes.io/projected/5635be3d-08e4-4fd2-b3e4-488dda21dce7-kube-api-access-dnpnv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: 
I1004 05:11:01.547015 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.547109 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.602666 4802 generic.go:334] "Generic (PLEG): container finished" podID="ad74ddca-2d42-4c28-8147-6088b9876fa1" containerID="86bad09673d41f9e58d0399c1830b182b683a7461d426def0d1649b5876b07a9" exitCode=0 Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.602728 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad74ddca-2d42-4c28-8147-6088b9876fa1","Type":"ContainerDied","Data":"86bad09673d41f9e58d0399c1830b182b683a7461d426def0d1649b5876b07a9"} Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.648769 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnpnv\" (UniqueName: \"kubernetes.io/projected/5635be3d-08e4-4fd2-b3e4-488dda21dce7-kube-api-access-dnpnv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.649121 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.649146 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.649272 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.653345 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.653802 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 
05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.658859 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.670064 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnpnv\" (UniqueName: \"kubernetes.io/projected/5635be3d-08e4-4fd2-b3e4-488dda21dce7-kube-api-access-dnpnv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:01 crc kubenswrapper[4802]: I1004 05:11:01.684483 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:02 crc kubenswrapper[4802]: I1004 05:11:02.244741 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl"] Oct 04 05:11:02 crc kubenswrapper[4802]: W1004 05:11:02.262462 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5635be3d_08e4_4fd2_b3e4_488dda21dce7.slice/crio-0e3cc234a9e05ad42cdd30937543da4cca41637f607479297aa3b9628dd72724 WatchSource:0}: Error finding container 0e3cc234a9e05ad42cdd30937543da4cca41637f607479297aa3b9628dd72724: Status 404 returned error can't find the container with id 0e3cc234a9e05ad42cdd30937543da4cca41637f607479297aa3b9628dd72724 Oct 04 05:11:02 crc kubenswrapper[4802]: I1004 05:11:02.613875 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" event={"ID":"5635be3d-08e4-4fd2-b3e4-488dda21dce7","Type":"ContainerStarted","Data":"0e3cc234a9e05ad42cdd30937543da4cca41637f607479297aa3b9628dd72724"} Oct 04 05:11:02 crc kubenswrapper[4802]: I1004 05:11:02.617170 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ad74ddca-2d42-4c28-8147-6088b9876fa1","Type":"ContainerStarted","Data":"08206bab7808d4cadabe0a3cb819de86f49790f8fee6a6dcb237bb2c4c3cabca"} Oct 04 05:11:02 crc kubenswrapper[4802]: I1004 05:11:02.617409 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 04 05:11:02 crc kubenswrapper[4802]: I1004 05:11:02.619249 4802 generic.go:334] "Generic (PLEG): container finished" podID="e886bcf5-c8ec-465d-87cc-22b905bec5da" containerID="a501539c438a288c43faf127487891f41bc65f932de4f77ec58c8f2b59d4a411" exitCode=0 Oct 04 05:11:02 crc kubenswrapper[4802]: I1004 05:11:02.619285 4802 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e886bcf5-c8ec-465d-87cc-22b905bec5da","Type":"ContainerDied","Data":"a501539c438a288c43faf127487891f41bc65f932de4f77ec58c8f2b59d4a411"} Oct 04 05:11:02 crc kubenswrapper[4802]: I1004 05:11:02.649802 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.649778751 podStartE2EDuration="36.649778751s" podCreationTimestamp="2025-10-04 05:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:11:02.637771579 +0000 UTC m=+1505.045772224" watchObservedRunningTime="2025-10-04 05:11:02.649778751 +0000 UTC m=+1505.057779376" Oct 04 05:11:03 crc kubenswrapper[4802]: I1004 05:11:03.633905 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e886bcf5-c8ec-465d-87cc-22b905bec5da","Type":"ContainerStarted","Data":"6b8f1c3b72f4a768d928ffa403508806664f16c065f000433c5182c5abe1c76d"} Oct 04 05:11:03 crc kubenswrapper[4802]: I1004 05:11:03.634400 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:11:03 crc kubenswrapper[4802]: I1004 05:11:03.665004 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.664964193 podStartE2EDuration="36.664964193s" podCreationTimestamp="2025-10-04 05:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:11:03.657194952 +0000 UTC m=+1506.065195587" watchObservedRunningTime="2025-10-04 05:11:03.664964193 +0000 UTC m=+1506.072964818" Oct 04 05:11:09 crc kubenswrapper[4802]: I1004 05:11:09.310188 4802 scope.go:117] "RemoveContainer" containerID="ac14ea64413095013dde2209208afc2007b001a031ab19bcb37134490027462d" Oct 04 
05:11:11 crc kubenswrapper[4802]: I1004 05:11:11.742717 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" event={"ID":"5635be3d-08e4-4fd2-b3e4-488dda21dce7","Type":"ContainerStarted","Data":"ec3d486ccc9246d6c5f55b26f18cbb62caed517d7e1299a83099b7b8a4001c5f"} Oct 04 05:11:11 crc kubenswrapper[4802]: I1004 05:11:11.758691 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" podStartSLOduration=2.038148981 podStartE2EDuration="10.758674604s" podCreationTimestamp="2025-10-04 05:11:01 +0000 UTC" firstStartedPulling="2025-10-04 05:11:02.266363803 +0000 UTC m=+1504.674364428" lastFinishedPulling="2025-10-04 05:11:10.986889426 +0000 UTC m=+1513.394890051" observedRunningTime="2025-10-04 05:11:11.756468811 +0000 UTC m=+1514.164469436" watchObservedRunningTime="2025-10-04 05:11:11.758674604 +0000 UTC m=+1514.166675229" Oct 04 05:11:16 crc kubenswrapper[4802]: I1004 05:11:16.685814 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 04 05:11:17 crc kubenswrapper[4802]: I1004 05:11:17.666858 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 04 05:11:22 crc kubenswrapper[4802]: I1004 05:11:22.662461 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:11:22 crc kubenswrapper[4802]: I1004 05:11:22.663169 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:11:22 crc kubenswrapper[4802]: I1004 05:11:22.663224 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:11:22 crc kubenswrapper[4802]: I1004 05:11:22.664525 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b26301f92c6ff409155d12712a68269dd9751a178e6afc83d2a6f8069fd1f8e"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:11:22 crc kubenswrapper[4802]: I1004 05:11:22.664590 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://5b26301f92c6ff409155d12712a68269dd9751a178e6afc83d2a6f8069fd1f8e" gracePeriod=600 Oct 04 05:11:22 crc kubenswrapper[4802]: I1004 05:11:22.848221 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="5b26301f92c6ff409155d12712a68269dd9751a178e6afc83d2a6f8069fd1f8e" exitCode=0 Oct 04 05:11:22 crc kubenswrapper[4802]: I1004 05:11:22.848564 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"5b26301f92c6ff409155d12712a68269dd9751a178e6afc83d2a6f8069fd1f8e"} Oct 04 05:11:22 crc kubenswrapper[4802]: I1004 05:11:22.848601 4802 scope.go:117] "RemoveContainer" containerID="2c8c1e44715835d6ef2d00db5cee02bc888c676507b5f91dafd169007af48bd8" Oct 04 05:11:23 crc kubenswrapper[4802]: I1004 05:11:23.861051 4802 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5"} Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.160258 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4fmzh"] Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.164941 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.175846 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fmzh"] Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.294632 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-utilities\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.294729 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-catalog-content\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.294750 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nftm\" (UniqueName: \"kubernetes.io/projected/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-kube-api-access-7nftm\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " 
pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.396176 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-catalog-content\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.396882 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nftm\" (UniqueName: \"kubernetes.io/projected/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-kube-api-access-7nftm\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.397428 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-utilities\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.396849 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-catalog-content\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.397775 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-utilities\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " 
pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.430903 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nftm\" (UniqueName: \"kubernetes.io/projected/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-kube-api-access-7nftm\") pod \"community-operators-4fmzh\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:24 crc kubenswrapper[4802]: I1004 05:11:24.491028 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:25 crc kubenswrapper[4802]: W1004 05:11:25.005615 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ace0cbf_635b_4e0e_8133_6d7fcc93893d.slice/crio-c6215f8a0ebea1c0b93ff51e6f39857dc4d66ee22b7c26374279dab430ce87ef WatchSource:0}: Error finding container c6215f8a0ebea1c0b93ff51e6f39857dc4d66ee22b7c26374279dab430ce87ef: Status 404 returned error can't find the container with id c6215f8a0ebea1c0b93ff51e6f39857dc4d66ee22b7c26374279dab430ce87ef Oct 04 05:11:25 crc kubenswrapper[4802]: I1004 05:11:25.007583 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fmzh"] Oct 04 05:11:25 crc kubenswrapper[4802]: I1004 05:11:25.886783 4802 generic.go:334] "Generic (PLEG): container finished" podID="5635be3d-08e4-4fd2-b3e4-488dda21dce7" containerID="ec3d486ccc9246d6c5f55b26f18cbb62caed517d7e1299a83099b7b8a4001c5f" exitCode=0 Oct 04 05:11:25 crc kubenswrapper[4802]: I1004 05:11:25.886967 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" event={"ID":"5635be3d-08e4-4fd2-b3e4-488dda21dce7","Type":"ContainerDied","Data":"ec3d486ccc9246d6c5f55b26f18cbb62caed517d7e1299a83099b7b8a4001c5f"} Oct 04 05:11:25 crc kubenswrapper[4802]: I1004 
05:11:25.889335 4802 generic.go:334] "Generic (PLEG): container finished" podID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerID="5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2" exitCode=0 Oct 04 05:11:25 crc kubenswrapper[4802]: I1004 05:11:25.889366 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fmzh" event={"ID":"3ace0cbf-635b-4e0e-8133-6d7fcc93893d","Type":"ContainerDied","Data":"5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2"} Oct 04 05:11:25 crc kubenswrapper[4802]: I1004 05:11:25.889416 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fmzh" event={"ID":"3ace0cbf-635b-4e0e-8133-6d7fcc93893d","Type":"ContainerStarted","Data":"c6215f8a0ebea1c0b93ff51e6f39857dc4d66ee22b7c26374279dab430ce87ef"} Oct 04 05:11:26 crc kubenswrapper[4802]: I1004 05:11:26.904168 4802 generic.go:334] "Generic (PLEG): container finished" podID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerID="c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a" exitCode=0 Oct 04 05:11:26 crc kubenswrapper[4802]: I1004 05:11:26.904991 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fmzh" event={"ID":"3ace0cbf-635b-4e0e-8133-6d7fcc93893d","Type":"ContainerDied","Data":"c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a"} Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.289224 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.352528 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-repo-setup-combined-ca-bundle\") pod \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.352736 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnpnv\" (UniqueName: \"kubernetes.io/projected/5635be3d-08e4-4fd2-b3e4-488dda21dce7-kube-api-access-dnpnv\") pod \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.352806 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-inventory\") pod \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.352824 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-ssh-key\") pod \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\" (UID: \"5635be3d-08e4-4fd2-b3e4-488dda21dce7\") " Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.359822 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5635be3d-08e4-4fd2-b3e4-488dda21dce7" (UID: "5635be3d-08e4-4fd2-b3e4-488dda21dce7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.360101 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5635be3d-08e4-4fd2-b3e4-488dda21dce7-kube-api-access-dnpnv" (OuterVolumeSpecName: "kube-api-access-dnpnv") pod "5635be3d-08e4-4fd2-b3e4-488dda21dce7" (UID: "5635be3d-08e4-4fd2-b3e4-488dda21dce7"). InnerVolumeSpecName "kube-api-access-dnpnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.381284 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5635be3d-08e4-4fd2-b3e4-488dda21dce7" (UID: "5635be3d-08e4-4fd2-b3e4-488dda21dce7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.388248 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-inventory" (OuterVolumeSpecName: "inventory") pod "5635be3d-08e4-4fd2-b3e4-488dda21dce7" (UID: "5635be3d-08e4-4fd2-b3e4-488dda21dce7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.455995 4802 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.456023 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnpnv\" (UniqueName: \"kubernetes.io/projected/5635be3d-08e4-4fd2-b3e4-488dda21dce7-kube-api-access-dnpnv\") on node \"crc\" DevicePath \"\"" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.456033 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.456041 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5635be3d-08e4-4fd2-b3e4-488dda21dce7-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.918659 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fmzh" event={"ID":"3ace0cbf-635b-4e0e-8133-6d7fcc93893d","Type":"ContainerStarted","Data":"b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026"} Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.921831 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" event={"ID":"5635be3d-08e4-4fd2-b3e4-488dda21dce7","Type":"ContainerDied","Data":"0e3cc234a9e05ad42cdd30937543da4cca41637f607479297aa3b9628dd72724"} Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.921891 4802 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0e3cc234a9e05ad42cdd30937543da4cca41637f607479297aa3b9628dd72724" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.921889 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.978255 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln"] Oct 04 05:11:27 crc kubenswrapper[4802]: E1004 05:11:27.978682 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5635be3d-08e4-4fd2-b3e4-488dda21dce7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.978701 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5635be3d-08e4-4fd2-b3e4-488dda21dce7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.978898 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5635be3d-08e4-4fd2-b3e4-488dda21dce7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.979518 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.981755 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.982123 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.982279 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.982518 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:11:27 crc kubenswrapper[4802]: I1004 05:11:27.994212 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln"] Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.067143 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97w4p\" (UniqueName: \"kubernetes.io/projected/e6992f33-4605-433b-a5c3-6b227ce6cfd2-kube-api-access-97w4p\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.067200 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.067240 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.067351 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.168842 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.168956 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97w4p\" (UniqueName: \"kubernetes.io/projected/e6992f33-4605-433b-a5c3-6b227ce6cfd2-kube-api-access-97w4p\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.168983 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" 
(UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.169013 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.180822 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.181348 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.188183 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.194360 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-97w4p\" (UniqueName: \"kubernetes.io/projected/e6992f33-4605-433b-a5c3-6b227ce6cfd2-kube-api-access-97w4p\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:28 crc kubenswrapper[4802]: I1004 05:11:28.299317 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:11:31 crc kubenswrapper[4802]: I1004 05:11:28.874572 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln"] Oct 04 05:11:31 crc kubenswrapper[4802]: I1004 05:11:28.932308 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" event={"ID":"e6992f33-4605-433b-a5c3-6b227ce6cfd2","Type":"ContainerStarted","Data":"8acd7fafc4c6bd3db7a89bc914796540b8066dbecc3da1829679c8e34edd41ae"} Oct 04 05:11:31 crc kubenswrapper[4802]: I1004 05:11:28.955473 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4fmzh" podStartSLOduration=3.401028384 podStartE2EDuration="4.955453538s" podCreationTimestamp="2025-10-04 05:11:24 +0000 UTC" firstStartedPulling="2025-10-04 05:11:25.891024155 +0000 UTC m=+1528.299024780" lastFinishedPulling="2025-10-04 05:11:27.445449309 +0000 UTC m=+1529.853449934" observedRunningTime="2025-10-04 05:11:28.948756337 +0000 UTC m=+1531.356756972" watchObservedRunningTime="2025-10-04 05:11:28.955453538 +0000 UTC m=+1531.363454163" Oct 04 05:11:31 crc kubenswrapper[4802]: I1004 05:11:30.964616 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" 
event={"ID":"e6992f33-4605-433b-a5c3-6b227ce6cfd2","Type":"ContainerStarted","Data":"6ae4de67f9b6c15dd8f3e64f74e36474c1c6854bed9e5f615557e1cae9058120"} Oct 04 05:11:31 crc kubenswrapper[4802]: I1004 05:11:31.991037 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" podStartSLOduration=3.666235313 podStartE2EDuration="4.991018347s" podCreationTimestamp="2025-10-04 05:11:27 +0000 UTC" firstStartedPulling="2025-10-04 05:11:28.880198341 +0000 UTC m=+1531.288198966" lastFinishedPulling="2025-10-04 05:11:30.204981375 +0000 UTC m=+1532.612982000" observedRunningTime="2025-10-04 05:11:31.988363321 +0000 UTC m=+1534.396363946" watchObservedRunningTime="2025-10-04 05:11:31.991018347 +0000 UTC m=+1534.399018972" Oct 04 05:11:34 crc kubenswrapper[4802]: I1004 05:11:34.492121 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:34 crc kubenswrapper[4802]: I1004 05:11:34.492382 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:34 crc kubenswrapper[4802]: I1004 05:11:34.564489 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:35 crc kubenswrapper[4802]: I1004 05:11:35.048112 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:35 crc kubenswrapper[4802]: I1004 05:11:35.090412 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fmzh"] Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.018374 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4fmzh" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" 
containerName="registry-server" containerID="cri-o://b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026" gracePeriod=2 Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.464356 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.543849 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nftm\" (UniqueName: \"kubernetes.io/projected/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-kube-api-access-7nftm\") pod \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.544617 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-utilities\") pod \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.544676 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-catalog-content\") pod \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\" (UID: \"3ace0cbf-635b-4e0e-8133-6d7fcc93893d\") " Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.545951 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-utilities" (OuterVolumeSpecName: "utilities") pod "3ace0cbf-635b-4e0e-8133-6d7fcc93893d" (UID: "3ace0cbf-635b-4e0e-8133-6d7fcc93893d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.551717 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-kube-api-access-7nftm" (OuterVolumeSpecName: "kube-api-access-7nftm") pod "3ace0cbf-635b-4e0e-8133-6d7fcc93893d" (UID: "3ace0cbf-635b-4e0e-8133-6d7fcc93893d"). InnerVolumeSpecName "kube-api-access-7nftm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.598934 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ace0cbf-635b-4e0e-8133-6d7fcc93893d" (UID: "3ace0cbf-635b-4e0e-8133-6d7fcc93893d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.647192 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.647224 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:11:37 crc kubenswrapper[4802]: I1004 05:11:37.647237 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nftm\" (UniqueName: \"kubernetes.io/projected/3ace0cbf-635b-4e0e-8133-6d7fcc93893d-kube-api-access-7nftm\") on node \"crc\" DevicePath \"\"" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.029494 4802 generic.go:334] "Generic (PLEG): container finished" podID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" 
containerID="b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026" exitCode=0 Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.029539 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fmzh" event={"ID":"3ace0cbf-635b-4e0e-8133-6d7fcc93893d","Type":"ContainerDied","Data":"b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026"} Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.029562 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fmzh" event={"ID":"3ace0cbf-635b-4e0e-8133-6d7fcc93893d","Type":"ContainerDied","Data":"c6215f8a0ebea1c0b93ff51e6f39857dc4d66ee22b7c26374279dab430ce87ef"} Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.029566 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fmzh" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.029580 4802 scope.go:117] "RemoveContainer" containerID="b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.056727 4802 scope.go:117] "RemoveContainer" containerID="c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.070125 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fmzh"] Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.078137 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4fmzh"] Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.107856 4802 scope.go:117] "RemoveContainer" containerID="5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.125157 4802 scope.go:117] "RemoveContainer" containerID="b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026" Oct 04 
05:11:38 crc kubenswrapper[4802]: E1004 05:11:38.125494 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026\": container with ID starting with b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026 not found: ID does not exist" containerID="b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.125527 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026"} err="failed to get container status \"b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026\": rpc error: code = NotFound desc = could not find container \"b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026\": container with ID starting with b6357a9a6cecc5af6216049ce6630041695a0668a63954059b4632f4680d7026 not found: ID does not exist" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.125555 4802 scope.go:117] "RemoveContainer" containerID="c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a" Oct 04 05:11:38 crc kubenswrapper[4802]: E1004 05:11:38.125913 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a\": container with ID starting with c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a not found: ID does not exist" containerID="c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.125945 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a"} err="failed to get container status 
\"c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a\": rpc error: code = NotFound desc = could not find container \"c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a\": container with ID starting with c46ad7d80819b91f58d7794b42e43ab9a4b538e76d0dc15e153ea47d7f121f0a not found: ID does not exist" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.125961 4802 scope.go:117] "RemoveContainer" containerID="5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2" Oct 04 05:11:38 crc kubenswrapper[4802]: E1004 05:11:38.126176 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2\": container with ID starting with 5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2 not found: ID does not exist" containerID="5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.126195 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2"} err="failed to get container status \"5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2\": rpc error: code = NotFound desc = could not find container \"5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2\": container with ID starting with 5b2c3c6a34117960a959c7ca4ad00eb7933515516e53e3e89431ed10554d15d2 not found: ID does not exist" Oct 04 05:11:38 crc kubenswrapper[4802]: I1004 05:11:38.368969 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" path="/var/lib/kubelet/pods/3ace0cbf-635b-4e0e-8133-6d7fcc93893d/volumes" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.545187 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5r76w"] Oct 04 05:11:56 
crc kubenswrapper[4802]: E1004 05:11:56.546084 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerName="extract-content" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.546097 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerName="extract-content" Oct 04 05:11:56 crc kubenswrapper[4802]: E1004 05:11:56.546120 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerName="registry-server" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.546126 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerName="registry-server" Oct 04 05:11:56 crc kubenswrapper[4802]: E1004 05:11:56.546156 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerName="extract-utilities" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.546162 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerName="extract-utilities" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.546377 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ace0cbf-635b-4e0e-8133-6d7fcc93893d" containerName="registry-server" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.547722 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.556016 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5r76w"] Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.583268 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-utilities\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.583317 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7rs\" (UniqueName: \"kubernetes.io/projected/39af91c9-cd5d-4307-bf42-0737ec18fbf9-kube-api-access-rn7rs\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.583371 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-catalog-content\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.685246 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-utilities\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.685625 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rn7rs\" (UniqueName: \"kubernetes.io/projected/39af91c9-cd5d-4307-bf42-0737ec18fbf9-kube-api-access-rn7rs\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.685701 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-catalog-content\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.685879 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-utilities\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.687994 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-catalog-content\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.730092 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7rs\" (UniqueName: \"kubernetes.io/projected/39af91c9-cd5d-4307-bf42-0737ec18fbf9-kube-api-access-rn7rs\") pod \"certified-operators-5r76w\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:56 crc kubenswrapper[4802]: I1004 05:11:56.877745 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:11:57 crc kubenswrapper[4802]: I1004 05:11:57.457985 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5r76w"] Oct 04 05:11:58 crc kubenswrapper[4802]: I1004 05:11:58.198701 4802 generic.go:334] "Generic (PLEG): container finished" podID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerID="96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044" exitCode=0 Oct 04 05:11:58 crc kubenswrapper[4802]: I1004 05:11:58.198743 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5r76w" event={"ID":"39af91c9-cd5d-4307-bf42-0737ec18fbf9","Type":"ContainerDied","Data":"96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044"} Oct 04 05:11:58 crc kubenswrapper[4802]: I1004 05:11:58.198766 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5r76w" event={"ID":"39af91c9-cd5d-4307-bf42-0737ec18fbf9","Type":"ContainerStarted","Data":"712ca97cf09ea8e74d08c78dd9054feee73c68d2a915f899613966994993e941"} Oct 04 05:11:59 crc kubenswrapper[4802]: I1004 05:11:59.209930 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5r76w" event={"ID":"39af91c9-cd5d-4307-bf42-0737ec18fbf9","Type":"ContainerStarted","Data":"65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c"} Oct 04 05:12:00 crc kubenswrapper[4802]: I1004 05:12:00.219811 4802 generic.go:334] "Generic (PLEG): container finished" podID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerID="65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c" exitCode=0 Oct 04 05:12:00 crc kubenswrapper[4802]: I1004 05:12:00.220150 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5r76w" 
event={"ID":"39af91c9-cd5d-4307-bf42-0737ec18fbf9","Type":"ContainerDied","Data":"65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c"} Oct 04 05:12:01 crc kubenswrapper[4802]: I1004 05:12:01.230032 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5r76w" event={"ID":"39af91c9-cd5d-4307-bf42-0737ec18fbf9","Type":"ContainerStarted","Data":"68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771"} Oct 04 05:12:01 crc kubenswrapper[4802]: I1004 05:12:01.255686 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5r76w" podStartSLOduration=2.818195597 podStartE2EDuration="5.255667504s" podCreationTimestamp="2025-10-04 05:11:56 +0000 UTC" firstStartedPulling="2025-10-04 05:11:58.203254304 +0000 UTC m=+1560.611254929" lastFinishedPulling="2025-10-04 05:12:00.640726211 +0000 UTC m=+1563.048726836" observedRunningTime="2025-10-04 05:12:01.248740697 +0000 UTC m=+1563.656741352" watchObservedRunningTime="2025-10-04 05:12:01.255667504 +0000 UTC m=+1563.663668129" Oct 04 05:12:06 crc kubenswrapper[4802]: I1004 05:12:06.878569 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:12:06 crc kubenswrapper[4802]: I1004 05:12:06.879198 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:12:06 crc kubenswrapper[4802]: I1004 05:12:06.926221 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:12:07 crc kubenswrapper[4802]: I1004 05:12:07.342538 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:12:07 crc kubenswrapper[4802]: I1004 05:12:07.392142 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-5r76w"] Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.314443 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5r76w" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerName="registry-server" containerID="cri-o://68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771" gracePeriod=2 Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.771233 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.817458 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-utilities\") pod \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.817743 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-catalog-content\") pod \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.817796 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn7rs\" (UniqueName: \"kubernetes.io/projected/39af91c9-cd5d-4307-bf42-0737ec18fbf9-kube-api-access-rn7rs\") pod \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\" (UID: \"39af91c9-cd5d-4307-bf42-0737ec18fbf9\") " Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.818729 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-utilities" (OuterVolumeSpecName: "utilities") pod "39af91c9-cd5d-4307-bf42-0737ec18fbf9" (UID: 
"39af91c9-cd5d-4307-bf42-0737ec18fbf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.823920 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39af91c9-cd5d-4307-bf42-0737ec18fbf9-kube-api-access-rn7rs" (OuterVolumeSpecName: "kube-api-access-rn7rs") pod "39af91c9-cd5d-4307-bf42-0737ec18fbf9" (UID: "39af91c9-cd5d-4307-bf42-0737ec18fbf9"). InnerVolumeSpecName "kube-api-access-rn7rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.920200 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:12:09 crc kubenswrapper[4802]: I1004 05:12:09.920233 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn7rs\" (UniqueName: \"kubernetes.io/projected/39af91c9-cd5d-4307-bf42-0737ec18fbf9-kube-api-access-rn7rs\") on node \"crc\" DevicePath \"\"" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.327781 4802 generic.go:334] "Generic (PLEG): container finished" podID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerID="68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771" exitCode=0 Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.327831 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5r76w" event={"ID":"39af91c9-cd5d-4307-bf42-0737ec18fbf9","Type":"ContainerDied","Data":"68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771"} Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.327854 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5r76w" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.327874 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5r76w" event={"ID":"39af91c9-cd5d-4307-bf42-0737ec18fbf9","Type":"ContainerDied","Data":"712ca97cf09ea8e74d08c78dd9054feee73c68d2a915f899613966994993e941"} Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.327893 4802 scope.go:117] "RemoveContainer" containerID="68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.348266 4802 scope.go:117] "RemoveContainer" containerID="65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.370433 4802 scope.go:117] "RemoveContainer" containerID="96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.421011 4802 scope.go:117] "RemoveContainer" containerID="68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771" Oct 04 05:12:10 crc kubenswrapper[4802]: E1004 05:12:10.421493 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771\": container with ID starting with 68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771 not found: ID does not exist" containerID="68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.421538 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771"} err="failed to get container status \"68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771\": rpc error: code = NotFound desc = could not find container 
\"68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771\": container with ID starting with 68e659b0b391947d2736177c63ed4849f3641a49b27cbeaace750def0cfaa771 not found: ID does not exist" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.421564 4802 scope.go:117] "RemoveContainer" containerID="65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c" Oct 04 05:12:10 crc kubenswrapper[4802]: E1004 05:12:10.422039 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c\": container with ID starting with 65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c not found: ID does not exist" containerID="65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.422145 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c"} err="failed to get container status \"65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c\": rpc error: code = NotFound desc = could not find container \"65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c\": container with ID starting with 65a3e54968441bdc24a178e688d8ca9d6b3b1cea67dfd2be8b3923464d31387c not found: ID does not exist" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.422224 4802 scope.go:117] "RemoveContainer" containerID="96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044" Oct 04 05:12:10 crc kubenswrapper[4802]: E1004 05:12:10.422561 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044\": container with ID starting with 96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044 not found: ID does not exist" 
containerID="96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.422638 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044"} err="failed to get container status \"96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044\": rpc error: code = NotFound desc = could not find container \"96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044\": container with ID starting with 96c75b470cf8e85fdcafdf043f72fb681454f5ad4e34fb6b58f6aff47318c044 not found: ID does not exist" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.467728 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39af91c9-cd5d-4307-bf42-0737ec18fbf9" (UID: "39af91c9-cd5d-4307-bf42-0737ec18fbf9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.532773 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39af91c9-cd5d-4307-bf42-0737ec18fbf9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.666840 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5r76w"] Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.693327 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5r76w"] Oct 04 05:12:10 crc kubenswrapper[4802]: I1004 05:12:10.990346 4802 scope.go:117] "RemoveContainer" containerID="b192a3822d068e1a924e109c8acc06fa28df0f485892aa2f0bc1c29b137c2f35" Oct 04 05:12:11 crc kubenswrapper[4802]: I1004 05:12:11.031140 4802 scope.go:117] "RemoveContainer" containerID="60456b7a1a9871496824f0469c232d699d368b16662160fb332db0b69b7018af" Oct 04 05:12:11 crc kubenswrapper[4802]: I1004 05:12:11.062896 4802 scope.go:117] "RemoveContainer" containerID="901b85df8417b0551c3d1a54333431ac985324da78e35e3b6af6afaf4bb3b8bc" Oct 04 05:12:12 crc kubenswrapper[4802]: I1004 05:12:12.371943 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" path="/var/lib/kubelet/pods/39af91c9-cd5d-4307-bf42-0737ec18fbf9/volumes" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.632074 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bhk6n"] Oct 04 05:12:28 crc kubenswrapper[4802]: E1004 05:12:28.633052 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerName="extract-utilities" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.633070 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" 
containerName="extract-utilities" Oct 04 05:12:28 crc kubenswrapper[4802]: E1004 05:12:28.633093 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerName="extract-content" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.633101 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerName="extract-content" Oct 04 05:12:28 crc kubenswrapper[4802]: E1004 05:12:28.633123 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerName="registry-server" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.633131 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerName="registry-server" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.633372 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="39af91c9-cd5d-4307-bf42-0737ec18fbf9" containerName="registry-server" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.635030 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.639668 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhk6n"] Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.771342 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-catalog-content\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.771420 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-utilities\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.772079 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svrbr\" (UniqueName: \"kubernetes.io/projected/4c881409-ff86-4703-92ae-823b4d95e7c9-kube-api-access-svrbr\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.873898 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svrbr\" (UniqueName: \"kubernetes.io/projected/4c881409-ff86-4703-92ae-823b4d95e7c9-kube-api-access-svrbr\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.874017 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-catalog-content\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.874058 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-utilities\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.874487 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-utilities\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.874681 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-catalog-content\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.903751 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svrbr\" (UniqueName: \"kubernetes.io/projected/4c881409-ff86-4703-92ae-823b4d95e7c9-kube-api-access-svrbr\") pod \"redhat-marketplace-bhk6n\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:28 crc kubenswrapper[4802]: I1004 05:12:28.954775 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:29 crc kubenswrapper[4802]: I1004 05:12:29.436248 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhk6n"] Oct 04 05:12:29 crc kubenswrapper[4802]: I1004 05:12:29.505810 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhk6n" event={"ID":"4c881409-ff86-4703-92ae-823b4d95e7c9","Type":"ContainerStarted","Data":"e8f9e2b80dab9077c77f2479296787a11b05a85a31f35ce2c8b8ee4d222c3a27"} Oct 04 05:12:30 crc kubenswrapper[4802]: I1004 05:12:30.514804 4802 generic.go:334] "Generic (PLEG): container finished" podID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerID="a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2" exitCode=0 Oct 04 05:12:30 crc kubenswrapper[4802]: I1004 05:12:30.515044 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhk6n" event={"ID":"4c881409-ff86-4703-92ae-823b4d95e7c9","Type":"ContainerDied","Data":"a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2"} Oct 04 05:12:32 crc kubenswrapper[4802]: I1004 05:12:32.533612 4802 generic.go:334] "Generic (PLEG): container finished" podID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerID="fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371" exitCode=0 Oct 04 05:12:32 crc kubenswrapper[4802]: I1004 05:12:32.533688 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhk6n" event={"ID":"4c881409-ff86-4703-92ae-823b4d95e7c9","Type":"ContainerDied","Data":"fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371"} Oct 04 05:12:33 crc kubenswrapper[4802]: I1004 05:12:33.546840 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhk6n" 
event={"ID":"4c881409-ff86-4703-92ae-823b4d95e7c9","Type":"ContainerStarted","Data":"243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f"} Oct 04 05:12:38 crc kubenswrapper[4802]: I1004 05:12:38.954933 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:38 crc kubenswrapper[4802]: I1004 05:12:38.957302 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:39 crc kubenswrapper[4802]: I1004 05:12:39.004576 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:39 crc kubenswrapper[4802]: I1004 05:12:39.023798 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bhk6n" podStartSLOduration=8.586604302 podStartE2EDuration="11.02377477s" podCreationTimestamp="2025-10-04 05:12:28 +0000 UTC" firstStartedPulling="2025-10-04 05:12:30.51678731 +0000 UTC m=+1592.924787935" lastFinishedPulling="2025-10-04 05:12:32.953957778 +0000 UTC m=+1595.361958403" observedRunningTime="2025-10-04 05:12:33.581207093 +0000 UTC m=+1595.989207728" watchObservedRunningTime="2025-10-04 05:12:39.02377477 +0000 UTC m=+1601.431775415" Oct 04 05:12:39 crc kubenswrapper[4802]: I1004 05:12:39.640994 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:39 crc kubenswrapper[4802]: I1004 05:12:39.698561 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhk6n"] Oct 04 05:12:41 crc kubenswrapper[4802]: I1004 05:12:41.616783 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bhk6n" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerName="registry-server" 
containerID="cri-o://243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f" gracePeriod=2 Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.065209 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.146218 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-catalog-content\") pod \"4c881409-ff86-4703-92ae-823b4d95e7c9\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.146294 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svrbr\" (UniqueName: \"kubernetes.io/projected/4c881409-ff86-4703-92ae-823b4d95e7c9-kube-api-access-svrbr\") pod \"4c881409-ff86-4703-92ae-823b4d95e7c9\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.146481 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-utilities\") pod \"4c881409-ff86-4703-92ae-823b4d95e7c9\" (UID: \"4c881409-ff86-4703-92ae-823b4d95e7c9\") " Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.147596 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-utilities" (OuterVolumeSpecName: "utilities") pod "4c881409-ff86-4703-92ae-823b4d95e7c9" (UID: "4c881409-ff86-4703-92ae-823b4d95e7c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.152391 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c881409-ff86-4703-92ae-823b4d95e7c9-kube-api-access-svrbr" (OuterVolumeSpecName: "kube-api-access-svrbr") pod "4c881409-ff86-4703-92ae-823b4d95e7c9" (UID: "4c881409-ff86-4703-92ae-823b4d95e7c9"). InnerVolumeSpecName "kube-api-access-svrbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.161627 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c881409-ff86-4703-92ae-823b4d95e7c9" (UID: "4c881409-ff86-4703-92ae-823b4d95e7c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.249049 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.249099 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c881409-ff86-4703-92ae-823b4d95e7c9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.249110 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svrbr\" (UniqueName: \"kubernetes.io/projected/4c881409-ff86-4703-92ae-823b4d95e7c9-kube-api-access-svrbr\") on node \"crc\" DevicePath \"\"" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.638895 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhk6n" 
event={"ID":"4c881409-ff86-4703-92ae-823b4d95e7c9","Type":"ContainerDied","Data":"243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f"} Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.638949 4802 scope.go:117] "RemoveContainer" containerID="243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.639043 4802 generic.go:334] "Generic (PLEG): container finished" podID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerID="243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f" exitCode=0 Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.639070 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhk6n" event={"ID":"4c881409-ff86-4703-92ae-823b4d95e7c9","Type":"ContainerDied","Data":"e8f9e2b80dab9077c77f2479296787a11b05a85a31f35ce2c8b8ee4d222c3a27"} Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.639148 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhk6n" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.664160 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhk6n"] Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.668510 4802 scope.go:117] "RemoveContainer" containerID="fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.673795 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhk6n"] Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.687111 4802 scope.go:117] "RemoveContainer" containerID="a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.733122 4802 scope.go:117] "RemoveContainer" containerID="243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f" Oct 04 05:12:42 crc kubenswrapper[4802]: E1004 05:12:42.733697 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f\": container with ID starting with 243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f not found: ID does not exist" containerID="243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.733752 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f"} err="failed to get container status \"243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f\": rpc error: code = NotFound desc = could not find container \"243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f\": container with ID starting with 243f1f7f89a9c96331a7deec5485d848de4790812e1ad94f197aade400a73d0f not found: 
ID does not exist" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.733784 4802 scope.go:117] "RemoveContainer" containerID="fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371" Oct 04 05:12:42 crc kubenswrapper[4802]: E1004 05:12:42.734312 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371\": container with ID starting with fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371 not found: ID does not exist" containerID="fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.734425 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371"} err="failed to get container status \"fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371\": rpc error: code = NotFound desc = could not find container \"fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371\": container with ID starting with fb9fbf4e540faf162471a94b10b5b5234da98bf8fcd62bd1971240d2269b2371 not found: ID does not exist" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.734524 4802 scope.go:117] "RemoveContainer" containerID="a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2" Oct 04 05:12:42 crc kubenswrapper[4802]: E1004 05:12:42.734917 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2\": container with ID starting with a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2 not found: ID does not exist" containerID="a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2" Oct 04 05:12:42 crc kubenswrapper[4802]: I1004 05:12:42.734954 4802 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2"} err="failed to get container status \"a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2\": rpc error: code = NotFound desc = could not find container \"a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2\": container with ID starting with a266aa8c1fe04a10eefee841228a4234983b7f458a4e08a0d6f104e56bd775f2 not found: ID does not exist" Oct 04 05:12:44 crc kubenswrapper[4802]: I1004 05:12:44.369156 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" path="/var/lib/kubelet/pods/4c881409-ff86-4703-92ae-823b4d95e7c9/volumes" Oct 04 05:13:11 crc kubenswrapper[4802]: I1004 05:13:11.187770 4802 scope.go:117] "RemoveContainer" containerID="9b52750d4d5be522242a54f35a91878568173e4ea0ea28bcccf1fc3b5c689179" Oct 04 05:13:11 crc kubenswrapper[4802]: I1004 05:13:11.209874 4802 scope.go:117] "RemoveContainer" containerID="1201e702e79a5e472fcd74e8b0c30e18570bb160af864fb399a2f90de454d671" Oct 04 05:13:11 crc kubenswrapper[4802]: I1004 05:13:11.243320 4802 scope.go:117] "RemoveContainer" containerID="5e6f675210f2c7e5c66e6b93525b15b615bc9b84c7e973d543604550e82acc0a" Oct 04 05:13:11 crc kubenswrapper[4802]: I1004 05:13:11.271316 4802 scope.go:117] "RemoveContainer" containerID="7d203a3fb8c5895bf131c4ec74184353486aa9a470cba3ded4cada90689dde01" Oct 04 05:13:11 crc kubenswrapper[4802]: I1004 05:13:11.289124 4802 scope.go:117] "RemoveContainer" containerID="defb3fb5d3c680049f0af81ed7d55c8770ceee4531d8f3045846c25075e12384" Oct 04 05:13:52 crc kubenswrapper[4802]: I1004 05:13:52.662242 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 
04 05:13:52 crc kubenswrapper[4802]: I1004 05:13:52.662781 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:14:11 crc kubenswrapper[4802]: I1004 05:14:11.376490 4802 scope.go:117] "RemoveContainer" containerID="a02087030babb55b0180bed7d7abd0610837690bb3c926df0131f6f0fce452fd" Oct 04 05:14:11 crc kubenswrapper[4802]: I1004 05:14:11.399928 4802 scope.go:117] "RemoveContainer" containerID="4f3893a12e535b523b35967ab1357d6ba18a8aba70042d322aab537368ba05b7" Oct 04 05:14:22 crc kubenswrapper[4802]: I1004 05:14:22.662318 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:14:22 crc kubenswrapper[4802]: I1004 05:14:22.662936 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:14:52 crc kubenswrapper[4802]: I1004 05:14:52.664188 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:14:52 crc kubenswrapper[4802]: I1004 05:14:52.664735 4802 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:14:52 crc kubenswrapper[4802]: I1004 05:14:52.665017 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:14:52 crc kubenswrapper[4802]: I1004 05:14:52.665507 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:14:52 crc kubenswrapper[4802]: I1004 05:14:52.665564 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" gracePeriod=600 Oct 04 05:14:52 crc kubenswrapper[4802]: E1004 05:14:52.820719 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:14:53 crc kubenswrapper[4802]: I1004 05:14:53.744726 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" 
containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" exitCode=0 Oct 04 05:14:53 crc kubenswrapper[4802]: I1004 05:14:53.744797 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5"} Oct 04 05:14:53 crc kubenswrapper[4802]: I1004 05:14:53.745107 4802 scope.go:117] "RemoveContainer" containerID="5b26301f92c6ff409155d12712a68269dd9751a178e6afc83d2a6f8069fd1f8e" Oct 04 05:14:53 crc kubenswrapper[4802]: I1004 05:14:53.746256 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:14:53 crc kubenswrapper[4802]: E1004 05:14:53.746693 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:14:54 crc kubenswrapper[4802]: I1004 05:14:54.755162 4802 generic.go:334] "Generic (PLEG): container finished" podID="e6992f33-4605-433b-a5c3-6b227ce6cfd2" containerID="6ae4de67f9b6c15dd8f3e64f74e36474c1c6854bed9e5f615557e1cae9058120" exitCode=0 Oct 04 05:14:54 crc kubenswrapper[4802]: I1004 05:14:54.755250 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" event={"ID":"e6992f33-4605-433b-a5c3-6b227ce6cfd2","Type":"ContainerDied","Data":"6ae4de67f9b6c15dd8f3e64f74e36474c1c6854bed9e5f615557e1cae9058120"} Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.170289 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.228140 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-bootstrap-combined-ca-bundle\") pod \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.228468 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-ssh-key\") pod \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.228501 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-inventory\") pod \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.228610 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97w4p\" (UniqueName: \"kubernetes.io/projected/e6992f33-4605-433b-a5c3-6b227ce6cfd2-kube-api-access-97w4p\") pod \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\" (UID: \"e6992f33-4605-433b-a5c3-6b227ce6cfd2\") " Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.232991 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6992f33-4605-433b-a5c3-6b227ce6cfd2-kube-api-access-97w4p" (OuterVolumeSpecName: "kube-api-access-97w4p") pod "e6992f33-4605-433b-a5c3-6b227ce6cfd2" (UID: "e6992f33-4605-433b-a5c3-6b227ce6cfd2"). InnerVolumeSpecName "kube-api-access-97w4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.233803 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e6992f33-4605-433b-a5c3-6b227ce6cfd2" (UID: "e6992f33-4605-433b-a5c3-6b227ce6cfd2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.253565 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-inventory" (OuterVolumeSpecName: "inventory") pod "e6992f33-4605-433b-a5c3-6b227ce6cfd2" (UID: "e6992f33-4605-433b-a5c3-6b227ce6cfd2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.258787 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e6992f33-4605-433b-a5c3-6b227ce6cfd2" (UID: "e6992f33-4605-433b-a5c3-6b227ce6cfd2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.330259 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.330290 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.330301 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97w4p\" (UniqueName: \"kubernetes.io/projected/e6992f33-4605-433b-a5c3-6b227ce6cfd2-kube-api-access-97w4p\") on node \"crc\" DevicePath \"\"" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.330312 4802 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6992f33-4605-433b-a5c3-6b227ce6cfd2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.778931 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" event={"ID":"e6992f33-4605-433b-a5c3-6b227ce6cfd2","Type":"ContainerDied","Data":"8acd7fafc4c6bd3db7a89bc914796540b8066dbecc3da1829679c8e34edd41ae"} Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.779027 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.779037 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8acd7fafc4c6bd3db7a89bc914796540b8066dbecc3da1829679c8e34edd41ae" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.857694 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv"] Oct 04 05:14:56 crc kubenswrapper[4802]: E1004 05:14:56.858135 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerName="extract-content" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.858158 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerName="extract-content" Oct 04 05:14:56 crc kubenswrapper[4802]: E1004 05:14:56.858174 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6992f33-4605-433b-a5c3-6b227ce6cfd2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.858184 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6992f33-4605-433b-a5c3-6b227ce6cfd2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 04 05:14:56 crc kubenswrapper[4802]: E1004 05:14:56.858218 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerName="extract-utilities" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.858227 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerName="extract-utilities" Oct 04 05:14:56 crc kubenswrapper[4802]: E1004 05:14:56.858239 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerName="registry-server" Oct 04 05:14:56 crc 
kubenswrapper[4802]: I1004 05:14:56.858246 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerName="registry-server" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.858460 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c881409-ff86-4703-92ae-823b4d95e7c9" containerName="registry-server" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.858482 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6992f33-4605-433b-a5c3-6b227ce6cfd2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.859188 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.861366 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.861541 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.861632 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.862084 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.872082 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv"] Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.942234 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbdg\" (UniqueName: 
\"kubernetes.io/projected/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-kube-api-access-vsbdg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.942606 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:56 crc kubenswrapper[4802]: I1004 05:14:56.942749 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.044145 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.044209 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.044273 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbdg\" (UniqueName: \"kubernetes.io/projected/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-kube-api-access-vsbdg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.048569 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.051621 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.060805 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbdg\" (UniqueName: \"kubernetes.io/projected/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-kube-api-access-vsbdg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-58cwv\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.178989 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.681824 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv"] Oct 04 05:14:57 crc kubenswrapper[4802]: W1004 05:14:57.686475 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb608de4_6c70_4d9b_8c71_dcb8e1cd9132.slice/crio-9bcb2b129d2b6209f6bf8e304cf20a6cfcf9b83850a81654b742ba8992968fc1 WatchSource:0}: Error finding container 9bcb2b129d2b6209f6bf8e304cf20a6cfcf9b83850a81654b742ba8992968fc1: Status 404 returned error can't find the container with id 9bcb2b129d2b6209f6bf8e304cf20a6cfcf9b83850a81654b742ba8992968fc1 Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.689111 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 05:14:57 crc kubenswrapper[4802]: I1004 05:14:57.788339 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" event={"ID":"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132","Type":"ContainerStarted","Data":"9bcb2b129d2b6209f6bf8e304cf20a6cfcf9b83850a81654b742ba8992968fc1"} Oct 04 05:14:58 crc kubenswrapper[4802]: I1004 05:14:58.441538 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:14:58 crc kubenswrapper[4802]: I1004 05:14:58.799060 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" event={"ID":"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132","Type":"ContainerStarted","Data":"a5a9cde035062909bec6c4f4e0f7a058b0f562576e9321948bd5f12627c52f51"} Oct 04 05:14:58 crc kubenswrapper[4802]: I1004 05:14:58.823286 4802 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" podStartSLOduration=2.074923394 podStartE2EDuration="2.823266766s" podCreationTimestamp="2025-10-04 05:14:56 +0000 UTC" firstStartedPulling="2025-10-04 05:14:57.688829293 +0000 UTC m=+1740.096829918" lastFinishedPulling="2025-10-04 05:14:58.437172665 +0000 UTC m=+1740.845173290" observedRunningTime="2025-10-04 05:14:58.815952457 +0000 UTC m=+1741.223953082" watchObservedRunningTime="2025-10-04 05:14:58.823266766 +0000 UTC m=+1741.231267391" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.141562 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx"] Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.142799 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.146310 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.146330 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.153423 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx"] Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.198838 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/727b1862-5f93-460c-be9c-6bdd40d2a95c-secret-volume\") pod \"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc 
kubenswrapper[4802]: I1004 05:15:00.198995 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkvt\" (UniqueName: \"kubernetes.io/projected/727b1862-5f93-460c-be9c-6bdd40d2a95c-kube-api-access-gqkvt\") pod \"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.199100 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/727b1862-5f93-460c-be9c-6bdd40d2a95c-config-volume\") pod \"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.300632 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/727b1862-5f93-460c-be9c-6bdd40d2a95c-secret-volume\") pod \"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.300712 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkvt\" (UniqueName: \"kubernetes.io/projected/727b1862-5f93-460c-be9c-6bdd40d2a95c-kube-api-access-gqkvt\") pod \"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.300757 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/727b1862-5f93-460c-be9c-6bdd40d2a95c-config-volume\") pod 
\"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.301772 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/727b1862-5f93-460c-be9c-6bdd40d2a95c-config-volume\") pod \"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.308659 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/727b1862-5f93-460c-be9c-6bdd40d2a95c-secret-volume\") pod \"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.326218 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkvt\" (UniqueName: \"kubernetes.io/projected/727b1862-5f93-460c-be9c-6bdd40d2a95c-kube-api-access-gqkvt\") pod \"collect-profiles-29325915-twnvx\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.473157 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:00 crc kubenswrapper[4802]: I1004 05:15:00.962372 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx"] Oct 04 05:15:01 crc kubenswrapper[4802]: I1004 05:15:01.826037 4802 generic.go:334] "Generic (PLEG): container finished" podID="727b1862-5f93-460c-be9c-6bdd40d2a95c" containerID="c73c49a597409c6e1db4b4726e9f34e15b5d9f4d5599d4847ee17d23d06d107e" exitCode=0 Oct 04 05:15:01 crc kubenswrapper[4802]: I1004 05:15:01.826125 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" event={"ID":"727b1862-5f93-460c-be9c-6bdd40d2a95c","Type":"ContainerDied","Data":"c73c49a597409c6e1db4b4726e9f34e15b5d9f4d5599d4847ee17d23d06d107e"} Oct 04 05:15:01 crc kubenswrapper[4802]: I1004 05:15:01.826280 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" event={"ID":"727b1862-5f93-460c-be9c-6bdd40d2a95c","Type":"ContainerStarted","Data":"3f8c35e502ff6ad9901da62618fe924efb3b0d290030b9a8d0bfab9881df2906"} Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.112434 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.156167 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/727b1862-5f93-460c-be9c-6bdd40d2a95c-config-volume\") pod \"727b1862-5f93-460c-be9c-6bdd40d2a95c\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.156250 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/727b1862-5f93-460c-be9c-6bdd40d2a95c-secret-volume\") pod \"727b1862-5f93-460c-be9c-6bdd40d2a95c\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.156313 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqkvt\" (UniqueName: \"kubernetes.io/projected/727b1862-5f93-460c-be9c-6bdd40d2a95c-kube-api-access-gqkvt\") pod \"727b1862-5f93-460c-be9c-6bdd40d2a95c\" (UID: \"727b1862-5f93-460c-be9c-6bdd40d2a95c\") " Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.156922 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/727b1862-5f93-460c-be9c-6bdd40d2a95c-config-volume" (OuterVolumeSpecName: "config-volume") pod "727b1862-5f93-460c-be9c-6bdd40d2a95c" (UID: "727b1862-5f93-460c-be9c-6bdd40d2a95c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.157731 4802 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/727b1862-5f93-460c-be9c-6bdd40d2a95c-config-volume\") on node \"crc\" DevicePath \"\"" Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.161507 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727b1862-5f93-460c-be9c-6bdd40d2a95c-kube-api-access-gqkvt" (OuterVolumeSpecName: "kube-api-access-gqkvt") pod "727b1862-5f93-460c-be9c-6bdd40d2a95c" (UID: "727b1862-5f93-460c-be9c-6bdd40d2a95c"). InnerVolumeSpecName "kube-api-access-gqkvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.161499 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727b1862-5f93-460c-be9c-6bdd40d2a95c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "727b1862-5f93-460c-be9c-6bdd40d2a95c" (UID: "727b1862-5f93-460c-be9c-6bdd40d2a95c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.259281 4802 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/727b1862-5f93-460c-be9c-6bdd40d2a95c-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.259323 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqkvt\" (UniqueName: \"kubernetes.io/projected/727b1862-5f93-460c-be9c-6bdd40d2a95c-kube-api-access-gqkvt\") on node \"crc\" DevicePath \"\"" Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.846668 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" event={"ID":"727b1862-5f93-460c-be9c-6bdd40d2a95c","Type":"ContainerDied","Data":"3f8c35e502ff6ad9901da62618fe924efb3b0d290030b9a8d0bfab9881df2906"} Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.846966 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f8c35e502ff6ad9901da62618fe924efb3b0d290030b9a8d0bfab9881df2906" Oct 04 05:15:03 crc kubenswrapper[4802]: I1004 05:15:03.846753 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx" Oct 04 05:15:06 crc kubenswrapper[4802]: I1004 05:15:06.360282 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:15:06 crc kubenswrapper[4802]: E1004 05:15:06.360900 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:15:11 crc kubenswrapper[4802]: I1004 05:15:11.455656 4802 scope.go:117] "RemoveContainer" containerID="9a5850680a8eec3df42e163731d8524413a4593dee56ac9839414be74dd0c3f6" Oct 04 05:15:11 crc kubenswrapper[4802]: I1004 05:15:11.478100 4802 scope.go:117] "RemoveContainer" containerID="66a4397db3aa3b564541c72bf6078b02fa554f07664f5d5cd2c9f149e9548cf7" Oct 04 05:15:11 crc kubenswrapper[4802]: I1004 05:15:11.498260 4802 scope.go:117] "RemoveContainer" containerID="4bc32f197b4992b1b1578f1042653adfedfa919d8468fe73dcff8231ebb786be" Oct 04 05:15:11 crc kubenswrapper[4802]: I1004 05:15:11.515280 4802 scope.go:117] "RemoveContainer" containerID="170b2b484ef067a5895071fc8a0586f43fba7bf4f9e3a93919d3eb2c3d9fac2b" Oct 04 05:15:18 crc kubenswrapper[4802]: I1004 05:15:18.366626 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:15:18 crc kubenswrapper[4802]: E1004 05:15:18.367383 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:15:33 crc kubenswrapper[4802]: I1004 05:15:33.360928 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:15:33 crc kubenswrapper[4802]: E1004 05:15:33.361859 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:15:40 crc kubenswrapper[4802]: I1004 05:15:40.044560 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2m5vb"] Oct 04 05:15:40 crc kubenswrapper[4802]: I1004 05:15:40.058292 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2m5vb"] Oct 04 05:15:40 crc kubenswrapper[4802]: I1004 05:15:40.371416 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ff275a-01d8-4b28-a347-9e246e4582c5" path="/var/lib/kubelet/pods/c1ff275a-01d8-4b28-a347-9e246e4582c5/volumes" Oct 04 05:15:44 crc kubenswrapper[4802]: I1004 05:15:44.029002 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8cx9k"] Oct 04 05:15:44 crc kubenswrapper[4802]: I1004 05:15:44.037751 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5f56d"] Oct 04 05:15:44 crc kubenswrapper[4802]: I1004 05:15:44.045126 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8cx9k"] Oct 04 05:15:44 crc kubenswrapper[4802]: I1004 
05:15:44.051961 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5f56d"] Oct 04 05:15:44 crc kubenswrapper[4802]: I1004 05:15:44.369760 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1a4ed4-c9df-4edc-888c-4082e207cb07" path="/var/lib/kubelet/pods/4c1a4ed4-c9df-4edc-888c-4082e207cb07/volumes" Oct 04 05:15:44 crc kubenswrapper[4802]: I1004 05:15:44.371040 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63205f59-5c09-40c6-a124-524f46f70914" path="/var/lib/kubelet/pods/63205f59-5c09-40c6-a124-524f46f70914/volumes" Oct 04 05:15:48 crc kubenswrapper[4802]: I1004 05:15:48.365579 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:15:48 crc kubenswrapper[4802]: E1004 05:15:48.367149 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:15:49 crc kubenswrapper[4802]: I1004 05:15:49.039335 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-77a1-account-create-4qr87"] Oct 04 05:15:49 crc kubenswrapper[4802]: I1004 05:15:49.067839 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-77a1-account-create-4qr87"] Oct 04 05:15:50 crc kubenswrapper[4802]: I1004 05:15:50.370057 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f0cad5-fa50-4233-b440-1dc9a2afa31e" path="/var/lib/kubelet/pods/07f0cad5-fa50-4233-b440-1dc9a2afa31e/volumes" Oct 04 05:15:56 crc kubenswrapper[4802]: I1004 05:15:56.026394 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-cf51-account-create-hpqs2"] Oct 04 05:15:56 crc kubenswrapper[4802]: I1004 05:15:56.037968 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7995-account-create-ccmzc"] Oct 04 05:15:56 crc kubenswrapper[4802]: I1004 05:15:56.048141 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cf51-account-create-hpqs2"] Oct 04 05:15:56 crc kubenswrapper[4802]: I1004 05:15:56.057660 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7995-account-create-ccmzc"] Oct 04 05:15:56 crc kubenswrapper[4802]: I1004 05:15:56.371381 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="419d0cb8-1278-4521-a0b6-207795fdd75e" path="/var/lib/kubelet/pods/419d0cb8-1278-4521-a0b6-207795fdd75e/volumes" Oct 04 05:15:56 crc kubenswrapper[4802]: I1004 05:15:56.372107 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f65253-f4a0-477e-a6f1-9773fde497b5" path="/var/lib/kubelet/pods/80f65253-f4a0-477e-a6f1-9773fde497b5/volumes" Oct 04 05:16:01 crc kubenswrapper[4802]: I1004 05:16:01.359547 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:16:01 crc kubenswrapper[4802]: E1004 05:16:01.360249 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:16:09 crc kubenswrapper[4802]: I1004 05:16:09.029105 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-vqjw4"] Oct 04 05:16:09 crc kubenswrapper[4802]: I1004 05:16:09.037378 4802 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-db-create-vqjw4"] Oct 04 05:16:09 crc kubenswrapper[4802]: I1004 05:16:09.446997 4802 generic.go:334] "Generic (PLEG): container finished" podID="eb608de4-6c70-4d9b-8c71-dcb8e1cd9132" containerID="a5a9cde035062909bec6c4f4e0f7a058b0f562576e9321948bd5f12627c52f51" exitCode=0 Oct 04 05:16:09 crc kubenswrapper[4802]: I1004 05:16:09.447040 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" event={"ID":"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132","Type":"ContainerDied","Data":"a5a9cde035062909bec6c4f4e0f7a058b0f562576e9321948bd5f12627c52f51"} Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.026379 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qb9xs"] Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.036740 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-stbxp"] Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.045912 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-stbxp"] Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.054748 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qb9xs"] Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.374050 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806dfe15-9b99-4889-9bfe-4202e609e41a" path="/var/lib/kubelet/pods/806dfe15-9b99-4889-9bfe-4202e609e41a/volumes" Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.374966 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad98d92d-f119-4120-a3ca-e309fa442279" path="/var/lib/kubelet/pods/ad98d92d-f119-4120-a3ca-e309fa442279/volumes" Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.375628 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edce7589-9918-41f7-9d90-c5463388138f" 
path="/var/lib/kubelet/pods/edce7589-9918-41f7-9d90-c5463388138f/volumes" Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.832454 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.998552 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-inventory\") pod \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.998853 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsbdg\" (UniqueName: \"kubernetes.io/projected/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-kube-api-access-vsbdg\") pod \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " Oct 04 05:16:10 crc kubenswrapper[4802]: I1004 05:16:10.998914 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-ssh-key\") pod \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\" (UID: \"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132\") " Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.004829 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-kube-api-access-vsbdg" (OuterVolumeSpecName: "kube-api-access-vsbdg") pod "eb608de4-6c70-4d9b-8c71-dcb8e1cd9132" (UID: "eb608de4-6c70-4d9b-8c71-dcb8e1cd9132"). InnerVolumeSpecName "kube-api-access-vsbdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.029174 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb608de4-6c70-4d9b-8c71-dcb8e1cd9132" (UID: "eb608de4-6c70-4d9b-8c71-dcb8e1cd9132"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.029852 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-inventory" (OuterVolumeSpecName: "inventory") pod "eb608de4-6c70-4d9b-8c71-dcb8e1cd9132" (UID: "eb608de4-6c70-4d9b-8c71-dcb8e1cd9132"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.101284 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.101348 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsbdg\" (UniqueName: \"kubernetes.io/projected/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-kube-api-access-vsbdg\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.101362 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.464743 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" 
event={"ID":"eb608de4-6c70-4d9b-8c71-dcb8e1cd9132","Type":"ContainerDied","Data":"9bcb2b129d2b6209f6bf8e304cf20a6cfcf9b83850a81654b742ba8992968fc1"} Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.464788 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bcb2b129d2b6209f6bf8e304cf20a6cfcf9b83850a81654b742ba8992968fc1" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.464812 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.544271 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq"] Oct 04 05:16:11 crc kubenswrapper[4802]: E1004 05:16:11.544710 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb608de4-6c70-4d9b-8c71-dcb8e1cd9132" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.544738 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb608de4-6c70-4d9b-8c71-dcb8e1cd9132" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:11 crc kubenswrapper[4802]: E1004 05:16:11.544812 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727b1862-5f93-460c-be9c-6bdd40d2a95c" containerName="collect-profiles" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.544822 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="727b1862-5f93-460c-be9c-6bdd40d2a95c" containerName="collect-profiles" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.545016 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb608de4-6c70-4d9b-8c71-dcb8e1cd9132" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.545056 4802 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="727b1862-5f93-460c-be9c-6bdd40d2a95c" containerName="collect-profiles" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.545819 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.547951 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.548336 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.548672 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.549018 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.566944 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq"] Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.575885 4802 scope.go:117] "RemoveContainer" containerID="7270b2d21b8872762ff0ed9fa95c9649f68bb1048ee57a0ebaece313a06d75d7" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.595855 4802 scope.go:117] "RemoveContainer" containerID="f36c35a3acade327eb26707957db318e22130b8614b8b60df3cb392eed9d7fd4" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.639525 4802 scope.go:117] "RemoveContainer" containerID="bf4169ee1e91fde9d1c7ee7b42a6a3db1082e54d764d9667358530e3a965c51f" Oct 04 05:16:11 crc kubenswrapper[4802]: E1004 05:16:11.654404 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb608de4_6c70_4d9b_8c71_dcb8e1cd9132.slice/crio-9bcb2b129d2b6209f6bf8e304cf20a6cfcf9b83850a81654b742ba8992968fc1\": RecentStats: unable to find data in memory cache]" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.661146 4802 scope.go:117] "RemoveContainer" containerID="4d77bf6a9a71e08a591449c56506a2e3d8be7ac379462f9cffc53d05dd0ca081" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.682853 4802 scope.go:117] "RemoveContainer" containerID="5ae8b4db53f23af26a2918130b6285349ed2a548267eb703ee0e00573e76e47c" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.712990 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.713038 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqlt\" (UniqueName: \"kubernetes.io/projected/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-kube-api-access-cdqlt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.713124 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 
crc kubenswrapper[4802]: I1004 05:16:11.714338 4802 scope.go:117] "RemoveContainer" containerID="a458e41944b5197a1326b0e25eb2a7378db1fdee7da19b23915070165725cb7d" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.735207 4802 scope.go:117] "RemoveContainer" containerID="ff284b4fa3949ed045dd522e2605da57eefd71949992099d58be1aae7ac0273c" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.793070 4802 scope.go:117] "RemoveContainer" containerID="a10810ffa7d524cb8fbff339ea49a2ce39578cb3c79149b6e1a7816a26d72c7e" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.814205 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.814245 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdqlt\" (UniqueName: \"kubernetes.io/projected/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-kube-api-access-cdqlt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.814305 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.818472 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.818764 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.831915 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdqlt\" (UniqueName: \"kubernetes.io/projected/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-kube-api-access-cdqlt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.865396 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:11 crc kubenswrapper[4802]: I1004 05:16:11.949419 4802 scope.go:117] "RemoveContainer" containerID="29cb0a31221ef7d6224ae18fe223f10ecbbda0d8063076f24580c1ae81000fff" Oct 04 05:16:12 crc kubenswrapper[4802]: I1004 05:16:12.388379 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq"] Oct 04 05:16:12 crc kubenswrapper[4802]: I1004 05:16:12.474679 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" event={"ID":"f7a8891c-7f32-4c7e-8f71-8f359dd8de14","Type":"ContainerStarted","Data":"09406d055e0a7b5725fee52f0b1bf605ce926fb45385726c1e44c75974b17f87"} Oct 04 05:16:13 crc kubenswrapper[4802]: I1004 05:16:13.360331 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:16:13 crc kubenswrapper[4802]: E1004 05:16:13.360894 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:16:13 crc kubenswrapper[4802]: I1004 05:16:13.483271 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" event={"ID":"f7a8891c-7f32-4c7e-8f71-8f359dd8de14","Type":"ContainerStarted","Data":"d774fd189a818c341f413dbfdd5a2ccb1087e9ab98fc76fd3ed1f1bacf0fe5cb"} Oct 04 05:16:13 crc kubenswrapper[4802]: I1004 05:16:13.503501 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" podStartSLOduration=2.072745325 podStartE2EDuration="2.503480714s" podCreationTimestamp="2025-10-04 05:16:11 +0000 UTC" firstStartedPulling="2025-10-04 05:16:12.393594439 +0000 UTC m=+1814.801595064" lastFinishedPulling="2025-10-04 05:16:12.824329828 +0000 UTC m=+1815.232330453" observedRunningTime="2025-10-04 05:16:13.497234061 +0000 UTC m=+1815.905234696" watchObservedRunningTime="2025-10-04 05:16:13.503480714 +0000 UTC m=+1815.911481339" Oct 04 05:16:14 crc kubenswrapper[4802]: I1004 05:16:14.025919 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-l46qt"] Oct 04 05:16:14 crc kubenswrapper[4802]: I1004 05:16:14.033693 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rn59p"] Oct 04 05:16:14 crc kubenswrapper[4802]: I1004 05:16:14.041515 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-l46qt"] Oct 04 05:16:14 crc kubenswrapper[4802]: I1004 05:16:14.050113 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rn59p"] Oct 04 05:16:14 crc kubenswrapper[4802]: I1004 05:16:14.372616 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed629005-3761-4623-99e5-723e05932230" path="/var/lib/kubelet/pods/ed629005-3761-4623-99e5-723e05932230/volumes" Oct 04 05:16:14 crc kubenswrapper[4802]: I1004 05:16:14.373430 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f144c400-3cd4-4933-a2c6-ab57e96bb6d9" path="/var/lib/kubelet/pods/f144c400-3cd4-4933-a2c6-ab57e96bb6d9/volumes" Oct 04 05:16:17 crc kubenswrapper[4802]: I1004 05:16:17.519165 4802 generic.go:334] "Generic (PLEG): container finished" podID="f7a8891c-7f32-4c7e-8f71-8f359dd8de14" containerID="d774fd189a818c341f413dbfdd5a2ccb1087e9ab98fc76fd3ed1f1bacf0fe5cb" exitCode=0 Oct 04 05:16:17 crc kubenswrapper[4802]: I1004 05:16:17.519241 4802 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" event={"ID":"f7a8891c-7f32-4c7e-8f71-8f359dd8de14","Type":"ContainerDied","Data":"d774fd189a818c341f413dbfdd5a2ccb1087e9ab98fc76fd3ed1f1bacf0fe5cb"} Oct 04 05:16:18 crc kubenswrapper[4802]: I1004 05:16:18.872049 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.042243 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-inventory\") pod \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.042349 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdqlt\" (UniqueName: \"kubernetes.io/projected/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-kube-api-access-cdqlt\") pod \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.042445 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-ssh-key\") pod \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\" (UID: \"f7a8891c-7f32-4c7e-8f71-8f359dd8de14\") " Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.047934 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-kube-api-access-cdqlt" (OuterVolumeSpecName: "kube-api-access-cdqlt") pod "f7a8891c-7f32-4c7e-8f71-8f359dd8de14" (UID: "f7a8891c-7f32-4c7e-8f71-8f359dd8de14"). InnerVolumeSpecName "kube-api-access-cdqlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.090418 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f7a8891c-7f32-4c7e-8f71-8f359dd8de14" (UID: "f7a8891c-7f32-4c7e-8f71-8f359dd8de14"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.090468 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-inventory" (OuterVolumeSpecName: "inventory") pod "f7a8891c-7f32-4c7e-8f71-8f359dd8de14" (UID: "f7a8891c-7f32-4c7e-8f71-8f359dd8de14"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.144427 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.144463 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.144473 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdqlt\" (UniqueName: \"kubernetes.io/projected/f7a8891c-7f32-4c7e-8f71-8f359dd8de14-kube-api-access-cdqlt\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.536752 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" 
event={"ID":"f7a8891c-7f32-4c7e-8f71-8f359dd8de14","Type":"ContainerDied","Data":"09406d055e0a7b5725fee52f0b1bf605ce926fb45385726c1e44c75974b17f87"} Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.536800 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09406d055e0a7b5725fee52f0b1bf605ce926fb45385726c1e44c75974b17f87" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.536877 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.950257 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh"] Oct 04 05:16:19 crc kubenswrapper[4802]: E1004 05:16:19.950739 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a8891c-7f32-4c7e-8f71-8f359dd8de14" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.950759 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a8891c-7f32-4c7e-8f71-8f359dd8de14" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.950992 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a8891c-7f32-4c7e-8f71-8f359dd8de14" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.952024 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.957147 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.957333 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.957442 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.958000 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:16:19 crc kubenswrapper[4802]: I1004 05:16:19.960984 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh"] Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.059542 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.059608 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.059912 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlplj\" (UniqueName: \"kubernetes.io/projected/b6f8085b-5435-44e5-ad0d-d189e218f138-kube-api-access-qlplj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.161318 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlplj\" (UniqueName: \"kubernetes.io/projected/b6f8085b-5435-44e5-ad0d-d189e218f138-kube-api-access-qlplj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.161378 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.161423 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.169387 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: 
\"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.175071 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.177731 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlplj\" (UniqueName: \"kubernetes.io/projected/b6f8085b-5435-44e5-ad0d-d189e218f138-kube-api-access-qlplj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2cgh\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.276186 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:20 crc kubenswrapper[4802]: W1004 05:16:20.775015 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6f8085b_5435_44e5_ad0d_d189e218f138.slice/crio-abfa452ffe77432890d578e5ffda95d41d96672760a4d72c351366002ff0e241 WatchSource:0}: Error finding container abfa452ffe77432890d578e5ffda95d41d96672760a4d72c351366002ff0e241: Status 404 returned error can't find the container with id abfa452ffe77432890d578e5ffda95d41d96672760a4d72c351366002ff0e241 Oct 04 05:16:20 crc kubenswrapper[4802]: I1004 05:16:20.778952 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh"] Oct 04 05:16:21 crc kubenswrapper[4802]: I1004 05:16:21.556266 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" event={"ID":"b6f8085b-5435-44e5-ad0d-d189e218f138","Type":"ContainerStarted","Data":"abfa452ffe77432890d578e5ffda95d41d96672760a4d72c351366002ff0e241"} Oct 04 05:16:22 crc kubenswrapper[4802]: I1004 05:16:22.568189 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" event={"ID":"b6f8085b-5435-44e5-ad0d-d189e218f138","Type":"ContainerStarted","Data":"a28c9fb77eabba6ad40293f646e33170f2afcfb43e38d2ae1ae01444d005ccab"} Oct 04 05:16:22 crc kubenswrapper[4802]: I1004 05:16:22.588977 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" podStartSLOduration=3.000073431 podStartE2EDuration="3.588952964s" podCreationTimestamp="2025-10-04 05:16:19 +0000 UTC" firstStartedPulling="2025-10-04 05:16:20.776955391 +0000 UTC m=+1823.184956016" lastFinishedPulling="2025-10-04 05:16:21.365834924 +0000 UTC m=+1823.773835549" 
observedRunningTime="2025-10-04 05:16:22.583657094 +0000 UTC m=+1824.991657729" watchObservedRunningTime="2025-10-04 05:16:22.588952964 +0000 UTC m=+1824.996953589" Oct 04 05:16:25 crc kubenswrapper[4802]: I1004 05:16:25.359443 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:16:25 crc kubenswrapper[4802]: E1004 05:16:25.360066 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:16:29 crc kubenswrapper[4802]: I1004 05:16:29.035929 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-93e5-account-create-bq7mn"] Oct 04 05:16:29 crc kubenswrapper[4802]: I1004 05:16:29.046341 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c331-account-create-9dbks"] Oct 04 05:16:29 crc kubenswrapper[4802]: I1004 05:16:29.056008 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-93e5-account-create-bq7mn"] Oct 04 05:16:29 crc kubenswrapper[4802]: I1004 05:16:29.064039 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d697-account-create-fm82l"] Oct 04 05:16:29 crc kubenswrapper[4802]: I1004 05:16:29.071415 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c331-account-create-9dbks"] Oct 04 05:16:29 crc kubenswrapper[4802]: I1004 05:16:29.079038 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d697-account-create-fm82l"] Oct 04 05:16:30 crc kubenswrapper[4802]: I1004 05:16:30.028034 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-jk5tx"] Oct 04 05:16:30 crc kubenswrapper[4802]: I1004 05:16:30.038986 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jk5tx"] Oct 04 05:16:30 crc kubenswrapper[4802]: I1004 05:16:30.369991 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d077b7d-471d-4f5c-a970-0c8d775643dc" path="/var/lib/kubelet/pods/1d077b7d-471d-4f5c-a970-0c8d775643dc/volumes" Oct 04 05:16:30 crc kubenswrapper[4802]: I1004 05:16:30.371179 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b0ce29-901c-4bdb-a6ca-a5dfb3987559" path="/var/lib/kubelet/pods/21b0ce29-901c-4bdb-a6ca-a5dfb3987559/volumes" Oct 04 05:16:30 crc kubenswrapper[4802]: I1004 05:16:30.371860 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f0f5f8-50f9-4c45-9223-7c43bd900627" path="/var/lib/kubelet/pods/e1f0f5f8-50f9-4c45-9223-7c43bd900627/volumes" Oct 04 05:16:30 crc kubenswrapper[4802]: I1004 05:16:30.372618 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a68aa1-61b6-4151-b77b-8b107570d0e6" path="/var/lib/kubelet/pods/f5a68aa1-61b6-4151-b77b-8b107570d0e6/volumes" Oct 04 05:16:35 crc kubenswrapper[4802]: I1004 05:16:35.035286 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ghjl7"] Oct 04 05:16:35 crc kubenswrapper[4802]: I1004 05:16:35.044108 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ghjl7"] Oct 04 05:16:36 crc kubenswrapper[4802]: I1004 05:16:36.388264 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28fb85c9-6063-44b2-871a-4c39ae649b9c" path="/var/lib/kubelet/pods/28fb85c9-6063-44b2-871a-4c39ae649b9c/volumes" Oct 04 05:16:38 crc kubenswrapper[4802]: I1004 05:16:38.364712 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:16:38 crc kubenswrapper[4802]: E1004 
05:16:38.365208 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:16:49 crc kubenswrapper[4802]: I1004 05:16:49.360282 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:16:49 crc kubenswrapper[4802]: E1004 05:16:49.361568 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:16:55 crc kubenswrapper[4802]: I1004 05:16:55.843608 4802 generic.go:334] "Generic (PLEG): container finished" podID="b6f8085b-5435-44e5-ad0d-d189e218f138" containerID="a28c9fb77eabba6ad40293f646e33170f2afcfb43e38d2ae1ae01444d005ccab" exitCode=0 Oct 04 05:16:55 crc kubenswrapper[4802]: I1004 05:16:55.843757 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" event={"ID":"b6f8085b-5435-44e5-ad0d-d189e218f138","Type":"ContainerDied","Data":"a28c9fb77eabba6ad40293f646e33170f2afcfb43e38d2ae1ae01444d005ccab"} Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.232347 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.320309 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-ssh-key\") pod \"b6f8085b-5435-44e5-ad0d-d189e218f138\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.320584 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlplj\" (UniqueName: \"kubernetes.io/projected/b6f8085b-5435-44e5-ad0d-d189e218f138-kube-api-access-qlplj\") pod \"b6f8085b-5435-44e5-ad0d-d189e218f138\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.320692 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-inventory\") pod \"b6f8085b-5435-44e5-ad0d-d189e218f138\" (UID: \"b6f8085b-5435-44e5-ad0d-d189e218f138\") " Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.327853 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6f8085b-5435-44e5-ad0d-d189e218f138-kube-api-access-qlplj" (OuterVolumeSpecName: "kube-api-access-qlplj") pod "b6f8085b-5435-44e5-ad0d-d189e218f138" (UID: "b6f8085b-5435-44e5-ad0d-d189e218f138"). InnerVolumeSpecName "kube-api-access-qlplj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.346989 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b6f8085b-5435-44e5-ad0d-d189e218f138" (UID: "b6f8085b-5435-44e5-ad0d-d189e218f138"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.348841 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-inventory" (OuterVolumeSpecName: "inventory") pod "b6f8085b-5435-44e5-ad0d-d189e218f138" (UID: "b6f8085b-5435-44e5-ad0d-d189e218f138"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.423243 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.423503 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlplj\" (UniqueName: \"kubernetes.io/projected/b6f8085b-5435-44e5-ad0d-d189e218f138-kube-api-access-qlplj\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.423611 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f8085b-5435-44e5-ad0d-d189e218f138-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.861606 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" event={"ID":"b6f8085b-5435-44e5-ad0d-d189e218f138","Type":"ContainerDied","Data":"abfa452ffe77432890d578e5ffda95d41d96672760a4d72c351366002ff0e241"} Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.861673 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abfa452ffe77432890d578e5ffda95d41d96672760a4d72c351366002ff0e241" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.861674 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.941350 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc"] Oct 04 05:16:57 crc kubenswrapper[4802]: E1004 05:16:57.941780 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6f8085b-5435-44e5-ad0d-d189e218f138" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.941801 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6f8085b-5435-44e5-ad0d-d189e218f138" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.942006 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6f8085b-5435-44e5-ad0d-d189e218f138" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.942666 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.945887 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.946032 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.947729 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.947764 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:16:57 crc kubenswrapper[4802]: I1004 05:16:57.953817 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc"] Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.145341 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27g5\" (UniqueName: \"kubernetes.io/projected/52b35f0d-4354-452d-96d8-4079505bc44b-kube-api-access-d27g5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.145461 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.145750 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.247347 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.247427 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27g5\" (UniqueName: \"kubernetes.io/projected/52b35f0d-4354-452d-96d8-4079505bc44b-kube-api-access-d27g5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.247542 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.251630 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: 
\"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.257145 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.265267 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27g5\" (UniqueName: \"kubernetes.io/projected/52b35f0d-4354-452d-96d8-4079505bc44b-kube-api-access-d27g5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.562208 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:16:58 crc kubenswrapper[4802]: I1004 05:16:58.570927 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:16:59 crc kubenswrapper[4802]: I1004 05:16:59.041213 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-829jj"] Oct 04 05:16:59 crc kubenswrapper[4802]: I1004 05:16:59.053607 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-829jj"] Oct 04 05:16:59 crc kubenswrapper[4802]: I1004 05:16:59.116800 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc"] Oct 04 05:16:59 crc kubenswrapper[4802]: W1004 05:16:59.119635 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b35f0d_4354_452d_96d8_4079505bc44b.slice/crio-e286a89e9f3a51d690c9eb87ad127d9ba2708c328b88e575d1bc67ca4054f1b4 WatchSource:0}: Error finding container e286a89e9f3a51d690c9eb87ad127d9ba2708c328b88e575d1bc67ca4054f1b4: Status 404 returned error can't find the container with id e286a89e9f3a51d690c9eb87ad127d9ba2708c328b88e575d1bc67ca4054f1b4 Oct 04 05:16:59 crc kubenswrapper[4802]: I1004 05:16:59.635985 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:16:59 crc kubenswrapper[4802]: I1004 05:16:59.880932 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" event={"ID":"52b35f0d-4354-452d-96d8-4079505bc44b","Type":"ContainerStarted","Data":"2a5bf269b62e57ff07c6181fc1babb95026c83a0a58cc50335d1a5701d9a1b70"} Oct 04 05:16:59 crc kubenswrapper[4802]: I1004 05:16:59.881259 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" event={"ID":"52b35f0d-4354-452d-96d8-4079505bc44b","Type":"ContainerStarted","Data":"e286a89e9f3a51d690c9eb87ad127d9ba2708c328b88e575d1bc67ca4054f1b4"} 
Oct 04 05:16:59 crc kubenswrapper[4802]: I1004 05:16:59.895930 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" podStartSLOduration=2.384946073 podStartE2EDuration="2.895912265s" podCreationTimestamp="2025-10-04 05:16:57 +0000 UTC" firstStartedPulling="2025-10-04 05:16:59.122460715 +0000 UTC m=+1861.530461340" lastFinishedPulling="2025-10-04 05:16:59.633426907 +0000 UTC m=+1862.041427532" observedRunningTime="2025-10-04 05:16:59.892927652 +0000 UTC m=+1862.300928277" watchObservedRunningTime="2025-10-04 05:16:59.895912265 +0000 UTC m=+1862.303912890" Oct 04 05:17:00 crc kubenswrapper[4802]: I1004 05:17:00.373248 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6397562-8380-4277-8f96-bf264f7049a2" path="/var/lib/kubelet/pods/b6397562-8380-4277-8f96-bf264f7049a2/volumes" Oct 04 05:17:03 crc kubenswrapper[4802]: I1004 05:17:03.028752 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-pm9gl"] Oct 04 05:17:03 crc kubenswrapper[4802]: I1004 05:17:03.036727 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-pm9gl"] Oct 04 05:17:03 crc kubenswrapper[4802]: I1004 05:17:03.916372 4802 generic.go:334] "Generic (PLEG): container finished" podID="52b35f0d-4354-452d-96d8-4079505bc44b" containerID="2a5bf269b62e57ff07c6181fc1babb95026c83a0a58cc50335d1a5701d9a1b70" exitCode=0 Oct 04 05:17:03 crc kubenswrapper[4802]: I1004 05:17:03.916422 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" event={"ID":"52b35f0d-4354-452d-96d8-4079505bc44b","Type":"ContainerDied","Data":"2a5bf269b62e57ff07c6181fc1babb95026c83a0a58cc50335d1a5701d9a1b70"} Oct 04 05:17:04 crc kubenswrapper[4802]: I1004 05:17:04.359579 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 
05:17:04 crc kubenswrapper[4802]: E1004 05:17:04.360519 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:17:04 crc kubenswrapper[4802]: I1004 05:17:04.372593 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27caca0-bc37-41bd-a5fb-3536cfc1dfa1" path="/var/lib/kubelet/pods/b27caca0-bc37-41bd-a5fb-3536cfc1dfa1/volumes" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.294595 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.479911 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-inventory\") pod \"52b35f0d-4354-452d-96d8-4079505bc44b\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.479975 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-ssh-key\") pod \"52b35f0d-4354-452d-96d8-4079505bc44b\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.480140 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d27g5\" (UniqueName: \"kubernetes.io/projected/52b35f0d-4354-452d-96d8-4079505bc44b-kube-api-access-d27g5\") pod \"52b35f0d-4354-452d-96d8-4079505bc44b\" (UID: \"52b35f0d-4354-452d-96d8-4079505bc44b\") " 
Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.485478 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b35f0d-4354-452d-96d8-4079505bc44b-kube-api-access-d27g5" (OuterVolumeSpecName: "kube-api-access-d27g5") pod "52b35f0d-4354-452d-96d8-4079505bc44b" (UID: "52b35f0d-4354-452d-96d8-4079505bc44b"). InnerVolumeSpecName "kube-api-access-d27g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.506357 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "52b35f0d-4354-452d-96d8-4079505bc44b" (UID: "52b35f0d-4354-452d-96d8-4079505bc44b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.510149 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-inventory" (OuterVolumeSpecName: "inventory") pod "52b35f0d-4354-452d-96d8-4079505bc44b" (UID: "52b35f0d-4354-452d-96d8-4079505bc44b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.583013 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d27g5\" (UniqueName: \"kubernetes.io/projected/52b35f0d-4354-452d-96d8-4079505bc44b-kube-api-access-d27g5\") on node \"crc\" DevicePath \"\"" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.583047 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.583056 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b35f0d-4354-452d-96d8-4079505bc44b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.934006 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" event={"ID":"52b35f0d-4354-452d-96d8-4079505bc44b","Type":"ContainerDied","Data":"e286a89e9f3a51d690c9eb87ad127d9ba2708c328b88e575d1bc67ca4054f1b4"} Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.934060 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e286a89e9f3a51d690c9eb87ad127d9ba2708c328b88e575d1bc67ca4054f1b4" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.934059 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.998169 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn"] Oct 04 05:17:05 crc kubenswrapper[4802]: E1004 05:17:05.998566 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b35f0d-4354-452d-96d8-4079505bc44b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.998587 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b35f0d-4354-452d-96d8-4079505bc44b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.998783 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b35f0d-4354-452d-96d8-4079505bc44b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 04 05:17:05 crc kubenswrapper[4802]: I1004 05:17:05.999426 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.001619 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.002549 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.002695 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.003562 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.026399 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn"] Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.194483 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.194895 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.195130 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkzks\" (UniqueName: \"kubernetes.io/projected/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-kube-api-access-bkzks\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.297267 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.297324 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.297357 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkzks\" (UniqueName: \"kubernetes.io/projected/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-kube-api-access-bkzks\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.302132 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: 
\"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.302177 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.314004 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkzks\" (UniqueName: \"kubernetes.io/projected/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-kube-api-access-bkzks\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.315806 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.793969 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn"] Oct 04 05:17:06 crc kubenswrapper[4802]: I1004 05:17:06.942605 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" event={"ID":"3ed52aaa-5f2d-4199-ba3c-41251be41cbd","Type":"ContainerStarted","Data":"75dbdb66efbd0976c7ad448219fc40307aadcca9906515d5dd1b66f5e35807ed"} Oct 04 05:17:08 crc kubenswrapper[4802]: I1004 05:17:08.963843 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" event={"ID":"3ed52aaa-5f2d-4199-ba3c-41251be41cbd","Type":"ContainerStarted","Data":"17ed97b82b07f5c4ff6037e2c27630932aa252f6da49e75b3bbe22e52031a9da"} Oct 04 05:17:08 crc kubenswrapper[4802]: I1004 05:17:08.979008 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" podStartSLOduration=2.6981535819999998 podStartE2EDuration="3.978957315s" podCreationTimestamp="2025-10-04 05:17:05 +0000 UTC" firstStartedPulling="2025-10-04 05:17:06.800137265 +0000 UTC m=+1869.208137900" lastFinishedPulling="2025-10-04 05:17:08.080941008 +0000 UTC m=+1870.488941633" observedRunningTime="2025-10-04 05:17:08.976501415 +0000 UTC m=+1871.384502040" watchObservedRunningTime="2025-10-04 05:17:08.978957315 +0000 UTC m=+1871.386957940" Oct 04 05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.129248 4802 scope.go:117] "RemoveContainer" containerID="1103ac549024c5d9daa0e4a78f8e55ed7c1c3ca17a522f9f4cc2302ac85c2a03" Oct 04 05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.153271 4802 scope.go:117] "RemoveContainer" containerID="8be179e1554dbab44843a37fdf90a6b5ab9e2203351a982850da0f1dcc39e89c" Oct 04 
05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.199900 4802 scope.go:117] "RemoveContainer" containerID="c4caa4c3c80f979a3554ec1096134c180378ff66dc729cdedc3e4d48bf1504f9" Oct 04 05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.257619 4802 scope.go:117] "RemoveContainer" containerID="fbed82bfdf0bc8661a39f43bdd5de3db8e188e531dd977066bc7ec2b2a9ebc0f" Oct 04 05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.288834 4802 scope.go:117] "RemoveContainer" containerID="753fbdecf1a003bf9941c7febc9f392a5eff0ecf5e814e75f5dd344530c13edc" Oct 04 05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.334474 4802 scope.go:117] "RemoveContainer" containerID="6c2f299e3422f43f9175be4cff4c4839b14702487e78bf087eace274c9e42cd4" Oct 04 05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.375857 4802 scope.go:117] "RemoveContainer" containerID="cca958026b08c6167a99d0867a1fc5539ceba2ce0a547ae8249131d20069ef45" Oct 04 05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.393816 4802 scope.go:117] "RemoveContainer" containerID="063cb26e28fd4cbafe4288215ae4b857411b65df26a62d8f00036b1713918e2e" Oct 04 05:17:12 crc kubenswrapper[4802]: I1004 05:17:12.426064 4802 scope.go:117] "RemoveContainer" containerID="9686717e6b5c3a23b52989cd4006df86798432aaf61867e64f4aa3c0689052db" Oct 04 05:17:15 crc kubenswrapper[4802]: I1004 05:17:15.035587 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4mftc"] Oct 04 05:17:15 crc kubenswrapper[4802]: I1004 05:17:15.043271 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4mftc"] Oct 04 05:17:16 crc kubenswrapper[4802]: I1004 05:17:16.370311 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35c8191-842a-419f-8b4a-6f36bd01f6cd" path="/var/lib/kubelet/pods/f35c8191-842a-419f-8b4a-6f36bd01f6cd/volumes" Oct 04 05:17:19 crc kubenswrapper[4802]: I1004 05:17:19.360024 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 
05:17:19 crc kubenswrapper[4802]: E1004 05:17:19.360508 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:17:29 crc kubenswrapper[4802]: I1004 05:17:29.041415 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wfcjk"] Oct 04 05:17:29 crc kubenswrapper[4802]: I1004 05:17:29.049249 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-g6l87"] Oct 04 05:17:29 crc kubenswrapper[4802]: I1004 05:17:29.057930 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zxhcq"] Oct 04 05:17:29 crc kubenswrapper[4802]: I1004 05:17:29.064221 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wfcjk"] Oct 04 05:17:29 crc kubenswrapper[4802]: I1004 05:17:29.070085 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-g6l87"] Oct 04 05:17:29 crc kubenswrapper[4802]: I1004 05:17:29.076274 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zxhcq"] Oct 04 05:17:30 crc kubenswrapper[4802]: I1004 05:17:30.371378 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="342d0d97-3dda-4dde-b14c-1b7465e68e0b" path="/var/lib/kubelet/pods/342d0d97-3dda-4dde-b14c-1b7465e68e0b/volumes" Oct 04 05:17:30 crc kubenswrapper[4802]: I1004 05:17:30.372274 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd1006b-e193-4f3b-b81f-c0147c185ee5" path="/var/lib/kubelet/pods/4cd1006b-e193-4f3b-b81f-c0147c185ee5/volumes" Oct 04 05:17:30 crc kubenswrapper[4802]: 
I1004 05:17:30.372823 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6005e25a-ff70-4893-864d-f17cd5715536" path="/var/lib/kubelet/pods/6005e25a-ff70-4893-864d-f17cd5715536/volumes" Oct 04 05:17:31 crc kubenswrapper[4802]: I1004 05:17:31.360355 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:17:31 crc kubenswrapper[4802]: E1004 05:17:31.360750 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:17:38 crc kubenswrapper[4802]: I1004 05:17:38.033204 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a20f-account-create-6n47p"] Oct 04 05:17:38 crc kubenswrapper[4802]: I1004 05:17:38.041396 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a20f-account-create-6n47p"] Oct 04 05:17:38 crc kubenswrapper[4802]: I1004 05:17:38.049810 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b8fe-account-create-klqmh"] Oct 04 05:17:38 crc kubenswrapper[4802]: I1004 05:17:38.056181 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b8fe-account-create-klqmh"] Oct 04 05:17:38 crc kubenswrapper[4802]: I1004 05:17:38.372575 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1acdc363-88c6-46f9-b133-f8999c760804" path="/var/lib/kubelet/pods/1acdc363-88c6-46f9-b133-f8999c760804/volumes" Oct 04 05:17:38 crc kubenswrapper[4802]: I1004 05:17:38.373510 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b" 
path="/var/lib/kubelet/pods/4ee5ae23-6c7a-46d8-a51e-cb2febb3d35b/volumes" Oct 04 05:17:46 crc kubenswrapper[4802]: I1004 05:17:46.360811 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:17:46 crc kubenswrapper[4802]: E1004 05:17:46.361794 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:17:57 crc kubenswrapper[4802]: I1004 05:17:57.359794 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:17:57 crc kubenswrapper[4802]: E1004 05:17:57.360564 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:17:58 crc kubenswrapper[4802]: I1004 05:17:58.042151 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-dbb8-account-create-s4pzt"] Oct 04 05:17:58 crc kubenswrapper[4802]: I1004 05:17:58.051832 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-dbb8-account-create-s4pzt"] Oct 04 05:17:58 crc kubenswrapper[4802]: I1004 05:17:58.391847 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4707af-757c-4df5-935f-8a87a4fcde55" 
path="/var/lib/kubelet/pods/9d4707af-757c-4df5-935f-8a87a4fcde55/volumes" Oct 04 05:18:01 crc kubenswrapper[4802]: I1004 05:18:01.408477 4802 generic.go:334] "Generic (PLEG): container finished" podID="3ed52aaa-5f2d-4199-ba3c-41251be41cbd" containerID="17ed97b82b07f5c4ff6037e2c27630932aa252f6da49e75b3bbe22e52031a9da" exitCode=2 Oct 04 05:18:01 crc kubenswrapper[4802]: I1004 05:18:01.408671 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" event={"ID":"3ed52aaa-5f2d-4199-ba3c-41251be41cbd","Type":"ContainerDied","Data":"17ed97b82b07f5c4ff6037e2c27630932aa252f6da49e75b3bbe22e52031a9da"} Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.818544 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.869860 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkzks\" (UniqueName: \"kubernetes.io/projected/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-kube-api-access-bkzks\") pod \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.869906 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-ssh-key\") pod \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.870042 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-inventory\") pod \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\" (UID: \"3ed52aaa-5f2d-4199-ba3c-41251be41cbd\") " Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.875371 4802 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-kube-api-access-bkzks" (OuterVolumeSpecName: "kube-api-access-bkzks") pod "3ed52aaa-5f2d-4199-ba3c-41251be41cbd" (UID: "3ed52aaa-5f2d-4199-ba3c-41251be41cbd"). InnerVolumeSpecName "kube-api-access-bkzks". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.895399 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-inventory" (OuterVolumeSpecName: "inventory") pod "3ed52aaa-5f2d-4199-ba3c-41251be41cbd" (UID: "3ed52aaa-5f2d-4199-ba3c-41251be41cbd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.895804 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3ed52aaa-5f2d-4199-ba3c-41251be41cbd" (UID: "3ed52aaa-5f2d-4199-ba3c-41251be41cbd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.972325 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.972372 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkzks\" (UniqueName: \"kubernetes.io/projected/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-kube-api-access-bkzks\") on node \"crc\" DevicePath \"\"" Oct 04 05:18:02 crc kubenswrapper[4802]: I1004 05:18:02.972383 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed52aaa-5f2d-4199-ba3c-41251be41cbd-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:18:03 crc kubenswrapper[4802]: I1004 05:18:03.429596 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" event={"ID":"3ed52aaa-5f2d-4199-ba3c-41251be41cbd","Type":"ContainerDied","Data":"75dbdb66efbd0976c7ad448219fc40307aadcca9906515d5dd1b66f5e35807ed"} Oct 04 05:18:03 crc kubenswrapper[4802]: I1004 05:18:03.429699 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75dbdb66efbd0976c7ad448219fc40307aadcca9906515d5dd1b66f5e35807ed" Oct 04 05:18:03 crc kubenswrapper[4802]: I1004 05:18:03.429782 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.030530 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5"] Oct 04 05:18:11 crc kubenswrapper[4802]: E1004 05:18:11.031500 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed52aaa-5f2d-4199-ba3c-41251be41cbd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.031518 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed52aaa-5f2d-4199-ba3c-41251be41cbd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.031871 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed52aaa-5f2d-4199-ba3c-41251be41cbd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.032674 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.037570 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.037934 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.045628 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5"] Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.048248 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.051502 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.114665 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.114804 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpq5h\" (UniqueName: \"kubernetes.io/projected/6ff464f1-683e-45cc-afc6-3e6e6331ee45-kube-api-access-lpq5h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.114949 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.217259 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.217355 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.217455 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpq5h\" (UniqueName: \"kubernetes.io/projected/6ff464f1-683e-45cc-afc6-3e6e6331ee45-kube-api-access-lpq5h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.225601 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: 
\"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.232254 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.238283 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpq5h\" (UniqueName: \"kubernetes.io/projected/6ff464f1-683e-45cc-afc6-3e6e6331ee45-kube-api-access-lpq5h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.360806 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:18:11 crc kubenswrapper[4802]: E1004 05:18:11.361120 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.365746 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:11 crc kubenswrapper[4802]: I1004 05:18:11.994505 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5"] Oct 04 05:18:12 crc kubenswrapper[4802]: W1004 05:18:12.006393 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ff464f1_683e_45cc_afc6_3e6e6331ee45.slice/crio-645f380953e226d7ea3e083e95c29404fd0905a598acd21c93c258753f1f6283 WatchSource:0}: Error finding container 645f380953e226d7ea3e083e95c29404fd0905a598acd21c93c258753f1f6283: Status 404 returned error can't find the container with id 645f380953e226d7ea3e083e95c29404fd0905a598acd21c93c258753f1f6283 Oct 04 05:18:12 crc kubenswrapper[4802]: I1004 05:18:12.515022 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" event={"ID":"6ff464f1-683e-45cc-afc6-3e6e6331ee45","Type":"ContainerStarted","Data":"645f380953e226d7ea3e083e95c29404fd0905a598acd21c93c258753f1f6283"} Oct 04 05:18:12 crc kubenswrapper[4802]: I1004 05:18:12.633926 4802 scope.go:117] "RemoveContainer" containerID="9ea686ffd85d62daa78fed317c37819b25cc6f73417425e9276f30b17eea0d1c" Oct 04 05:18:12 crc kubenswrapper[4802]: I1004 05:18:12.776563 4802 scope.go:117] "RemoveContainer" containerID="a587f60628b5ace4ec60bd2bf5af527d23c93d79e0ccfe1c6755281fac07d999" Oct 04 05:18:12 crc kubenswrapper[4802]: I1004 05:18:12.836671 4802 scope.go:117] "RemoveContainer" containerID="d03f1589783209290cc609faef289c68be5893adda751858f5818d64f25b6204" Oct 04 05:18:12 crc kubenswrapper[4802]: I1004 05:18:12.989079 4802 scope.go:117] "RemoveContainer" containerID="64980ba67382b5c542fc102c6e9f4238a810dc6fc41531cb4cfdf5f6c49d21d8" Oct 04 05:18:13 crc kubenswrapper[4802]: I1004 05:18:13.018782 4802 scope.go:117] "RemoveContainer" 
containerID="9f6be82a578c052f2e5c92c3fc66bb9bba6ca2bd61c980a2efb7eed5d0a797be" Oct 04 05:18:13 crc kubenswrapper[4802]: I1004 05:18:13.052168 4802 scope.go:117] "RemoveContainer" containerID="ecf3392fab271a57083fe40d0af703db262223467ab728949e080cccc92f71cf" Oct 04 05:18:13 crc kubenswrapper[4802]: I1004 05:18:13.082070 4802 scope.go:117] "RemoveContainer" containerID="e4d187f1cbcaff8b871720ed05eda26a0288b917ddc1af61ef0eb52322bc1c1f" Oct 04 05:18:13 crc kubenswrapper[4802]: I1004 05:18:13.524760 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" event={"ID":"6ff464f1-683e-45cc-afc6-3e6e6331ee45","Type":"ContainerStarted","Data":"9423bdad419be243cf1ba24ac757f2af88db15e83131ff9d1af050feebd5980f"} Oct 04 05:18:13 crc kubenswrapper[4802]: I1004 05:18:13.547814 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" podStartSLOduration=1.7219480919999999 podStartE2EDuration="2.547784311s" podCreationTimestamp="2025-10-04 05:18:11 +0000 UTC" firstStartedPulling="2025-10-04 05:18:12.011228318 +0000 UTC m=+1934.419228943" lastFinishedPulling="2025-10-04 05:18:12.837064537 +0000 UTC m=+1935.245065162" observedRunningTime="2025-10-04 05:18:13.546938336 +0000 UTC m=+1935.954938961" watchObservedRunningTime="2025-10-04 05:18:13.547784311 +0000 UTC m=+1935.955784936" Oct 04 05:18:23 crc kubenswrapper[4802]: I1004 05:18:23.359827 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:18:23 crc kubenswrapper[4802]: E1004 05:18:23.361567 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:18:34 crc kubenswrapper[4802]: I1004 05:18:34.360483 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:18:34 crc kubenswrapper[4802]: E1004 05:18:34.361578 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:18:37 crc kubenswrapper[4802]: I1004 05:18:37.039778 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nxhb5"] Oct 04 05:18:37 crc kubenswrapper[4802]: I1004 05:18:37.046520 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nxhb5"] Oct 04 05:18:38 crc kubenswrapper[4802]: I1004 05:18:38.370986 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d698cfff-f3cf-46c2-9ff2-f1ac8262d5db" path="/var/lib/kubelet/pods/d698cfff-f3cf-46c2-9ff2-f1ac8262d5db/volumes" Oct 04 05:18:45 crc kubenswrapper[4802]: I1004 05:18:45.360009 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:18:45 crc kubenswrapper[4802]: E1004 05:18:45.360933 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:18:55 crc kubenswrapper[4802]: I1004 05:18:55.049789 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-b7dxc"] Oct 04 05:18:55 crc kubenswrapper[4802]: I1004 05:18:55.059839 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-b7dxc"] Oct 04 05:18:56 crc kubenswrapper[4802]: I1004 05:18:56.373377 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac00fcb-556f-496a-85e6-50e1985c617a" path="/var/lib/kubelet/pods/6ac00fcb-556f-496a-85e6-50e1985c617a/volumes" Oct 04 05:18:57 crc kubenswrapper[4802]: I1004 05:18:57.918384 4802 generic.go:334] "Generic (PLEG): container finished" podID="6ff464f1-683e-45cc-afc6-3e6e6331ee45" containerID="9423bdad419be243cf1ba24ac757f2af88db15e83131ff9d1af050feebd5980f" exitCode=0 Oct 04 05:18:57 crc kubenswrapper[4802]: I1004 05:18:57.918711 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" event={"ID":"6ff464f1-683e-45cc-afc6-3e6e6331ee45","Type":"ContainerDied","Data":"9423bdad419be243cf1ba24ac757f2af88db15e83131ff9d1af050feebd5980f"} Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.031945 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pgm8"] Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.039837 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pgm8"] Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.359742 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:18:59 crc kubenswrapper[4802]: E1004 05:18:59.360354 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.422367 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.560430 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-ssh-key\") pod \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.560682 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpq5h\" (UniqueName: \"kubernetes.io/projected/6ff464f1-683e-45cc-afc6-3e6e6331ee45-kube-api-access-lpq5h\") pod \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.560734 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-inventory\") pod \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\" (UID: \"6ff464f1-683e-45cc-afc6-3e6e6331ee45\") " Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.566546 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff464f1-683e-45cc-afc6-3e6e6331ee45-kube-api-access-lpq5h" (OuterVolumeSpecName: "kube-api-access-lpq5h") pod "6ff464f1-683e-45cc-afc6-3e6e6331ee45" (UID: "6ff464f1-683e-45cc-afc6-3e6e6331ee45"). InnerVolumeSpecName "kube-api-access-lpq5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.591811 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-inventory" (OuterVolumeSpecName: "inventory") pod "6ff464f1-683e-45cc-afc6-3e6e6331ee45" (UID: "6ff464f1-683e-45cc-afc6-3e6e6331ee45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.600488 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6ff464f1-683e-45cc-afc6-3e6e6331ee45" (UID: "6ff464f1-683e-45cc-afc6-3e6e6331ee45"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.663373 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpq5h\" (UniqueName: \"kubernetes.io/projected/6ff464f1-683e-45cc-afc6-3e6e6331ee45-kube-api-access-lpq5h\") on node \"crc\" DevicePath \"\"" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.663407 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.663417 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ff464f1-683e-45cc-afc6-3e6e6331ee45-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.938915 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" 
event={"ID":"6ff464f1-683e-45cc-afc6-3e6e6331ee45","Type":"ContainerDied","Data":"645f380953e226d7ea3e083e95c29404fd0905a598acd21c93c258753f1f6283"} Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.939238 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="645f380953e226d7ea3e083e95c29404fd0905a598acd21c93c258753f1f6283" Oct 04 05:18:59 crc kubenswrapper[4802]: I1004 05:18:59.938988 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.014512 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vz9fg"] Oct 04 05:19:00 crc kubenswrapper[4802]: E1004 05:19:00.015060 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff464f1-683e-45cc-afc6-3e6e6331ee45" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.015087 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff464f1-683e-45cc-afc6-3e6e6331ee45" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.015417 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff464f1-683e-45cc-afc6-3e6e6331ee45" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.016290 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.018268 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.018665 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.018863 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.019034 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.023521 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vz9fg"] Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.171729 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.171940 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2wz\" (UniqueName: \"kubernetes.io/projected/fb19d584-fe88-481f-aa23-8cbbe764ddc6-kube-api-access-fr2wz\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.172113 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.274475 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.274544 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.274627 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr2wz\" (UniqueName: \"kubernetes.io/projected/fb19d584-fe88-481f-aa23-8cbbe764ddc6-kube-api-access-fr2wz\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.279330 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc 
kubenswrapper[4802]: I1004 05:19:00.282280 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.293579 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr2wz\" (UniqueName: \"kubernetes.io/projected/fb19d584-fe88-481f-aa23-8cbbe764ddc6-kube-api-access-fr2wz\") pod \"ssh-known-hosts-edpm-deployment-vz9fg\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.332860 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.371230 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9896243-f600-4461-ac5c-e22070c86c51" path="/var/lib/kubelet/pods/a9896243-f600-4461-ac5c-e22070c86c51/volumes" Oct 04 05:19:00 crc kubenswrapper[4802]: I1004 05:19:00.841880 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vz9fg"] Oct 04 05:19:00 crc kubenswrapper[4802]: W1004 05:19:00.852910 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb19d584_fe88_481f_aa23_8cbbe764ddc6.slice/crio-bf82bce6407cc965405bd7412a95a84c37a560d174f73929b0bd8ea9b2e34220 WatchSource:0}: Error finding container bf82bce6407cc965405bd7412a95a84c37a560d174f73929b0bd8ea9b2e34220: Status 404 returned error can't find the container with id bf82bce6407cc965405bd7412a95a84c37a560d174f73929b0bd8ea9b2e34220 Oct 04 05:19:00 crc 
kubenswrapper[4802]: I1004 05:19:00.950709 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" event={"ID":"fb19d584-fe88-481f-aa23-8cbbe764ddc6","Type":"ContainerStarted","Data":"bf82bce6407cc965405bd7412a95a84c37a560d174f73929b0bd8ea9b2e34220"} Oct 04 05:19:02 crc kubenswrapper[4802]: I1004 05:19:02.968074 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" event={"ID":"fb19d584-fe88-481f-aa23-8cbbe764ddc6","Type":"ContainerStarted","Data":"ba170bf05d39d33f079dc82857da02d53c2019810eee2bb9505fa124ab6c57db"} Oct 04 05:19:02 crc kubenswrapper[4802]: I1004 05:19:02.993130 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" podStartSLOduration=2.83320411 podStartE2EDuration="3.993107665s" podCreationTimestamp="2025-10-04 05:18:59 +0000 UTC" firstStartedPulling="2025-10-04 05:19:00.855983811 +0000 UTC m=+1983.263984446" lastFinishedPulling="2025-10-04 05:19:02.015887376 +0000 UTC m=+1984.423888001" observedRunningTime="2025-10-04 05:19:02.983420229 +0000 UTC m=+1985.391420854" watchObservedRunningTime="2025-10-04 05:19:02.993107665 +0000 UTC m=+1985.401108300" Oct 04 05:19:10 crc kubenswrapper[4802]: I1004 05:19:10.027261 4802 generic.go:334] "Generic (PLEG): container finished" podID="fb19d584-fe88-481f-aa23-8cbbe764ddc6" containerID="ba170bf05d39d33f079dc82857da02d53c2019810eee2bb9505fa124ab6c57db" exitCode=0 Oct 04 05:19:10 crc kubenswrapper[4802]: I1004 05:19:10.027349 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" event={"ID":"fb19d584-fe88-481f-aa23-8cbbe764ddc6","Type":"ContainerDied","Data":"ba170bf05d39d33f079dc82857da02d53c2019810eee2bb9505fa124ab6c57db"} Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.477293 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.596994 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr2wz\" (UniqueName: \"kubernetes.io/projected/fb19d584-fe88-481f-aa23-8cbbe764ddc6-kube-api-access-fr2wz\") pod \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.597374 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-ssh-key-openstack-edpm-ipam\") pod \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.597431 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-inventory-0\") pod \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\" (UID: \"fb19d584-fe88-481f-aa23-8cbbe764ddc6\") " Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.604903 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb19d584-fe88-481f-aa23-8cbbe764ddc6-kube-api-access-fr2wz" (OuterVolumeSpecName: "kube-api-access-fr2wz") pod "fb19d584-fe88-481f-aa23-8cbbe764ddc6" (UID: "fb19d584-fe88-481f-aa23-8cbbe764ddc6"). InnerVolumeSpecName "kube-api-access-fr2wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.632016 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fb19d584-fe88-481f-aa23-8cbbe764ddc6" (UID: "fb19d584-fe88-481f-aa23-8cbbe764ddc6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.633862 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "fb19d584-fe88-481f-aa23-8cbbe764ddc6" (UID: "fb19d584-fe88-481f-aa23-8cbbe764ddc6"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.699820 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr2wz\" (UniqueName: \"kubernetes.io/projected/fb19d584-fe88-481f-aa23-8cbbe764ddc6-kube-api-access-fr2wz\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.699858 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:11 crc kubenswrapper[4802]: I1004 05:19:11.699868 4802 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fb19d584-fe88-481f-aa23-8cbbe764ddc6-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.059106 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" 
event={"ID":"fb19d584-fe88-481f-aa23-8cbbe764ddc6","Type":"ContainerDied","Data":"bf82bce6407cc965405bd7412a95a84c37a560d174f73929b0bd8ea9b2e34220"} Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.059146 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf82bce6407cc965405bd7412a95a84c37a560d174f73929b0bd8ea9b2e34220" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.059183 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vz9fg" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.109231 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp"] Oct 04 05:19:12 crc kubenswrapper[4802]: E1004 05:19:12.109669 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb19d584-fe88-481f-aa23-8cbbe764ddc6" containerName="ssh-known-hosts-edpm-deployment" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.109690 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb19d584-fe88-481f-aa23-8cbbe764ddc6" containerName="ssh-known-hosts-edpm-deployment" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.109918 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb19d584-fe88-481f-aa23-8cbbe764ddc6" containerName="ssh-known-hosts-edpm-deployment" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.110508 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.114154 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.114370 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.114753 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.114807 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.121380 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp"] Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.208881 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.209188 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjwp\" (UniqueName: \"kubernetes.io/projected/5c3bfd35-a1a2-41e1-a47b-b4a762090644-kube-api-access-kbjwp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.209530 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.310790 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.310835 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.310895 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjwp\" (UniqueName: \"kubernetes.io/projected/5c3bfd35-a1a2-41e1-a47b-b4a762090644-kube-api-access-kbjwp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.314768 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.314977 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.336466 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjwp\" (UniqueName: \"kubernetes.io/projected/5c3bfd35-a1a2-41e1-a47b-b4a762090644-kube-api-access-kbjwp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dfqtp\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.360119 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:19:12 crc kubenswrapper[4802]: E1004 05:19:12.360435 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.429184 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:12 crc kubenswrapper[4802]: I1004 05:19:12.959985 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp"] Oct 04 05:19:13 crc kubenswrapper[4802]: I1004 05:19:13.071678 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" event={"ID":"5c3bfd35-a1a2-41e1-a47b-b4a762090644","Type":"ContainerStarted","Data":"ddd67f681b08134049b1509e43c6a6e98aa765fb7e5de306d00fb145a112623b"} Oct 04 05:19:13 crc kubenswrapper[4802]: I1004 05:19:13.273067 4802 scope.go:117] "RemoveContainer" containerID="bcb4898118c3a26ed76cfd65622fa646515cf666f351c96d8e12e5848c0aa9bb" Oct 04 05:19:13 crc kubenswrapper[4802]: I1004 05:19:13.321519 4802 scope.go:117] "RemoveContainer" containerID="42f1ad68781c477f6ed3ed9eee01618c0deb791f59c951ba4f9302347df49f38" Oct 04 05:19:13 crc kubenswrapper[4802]: I1004 05:19:13.367853 4802 scope.go:117] "RemoveContainer" containerID="4b77a608b32ea901cba6731072f8ddbcf0794fae0851a787033ca9f4505b8808" Oct 04 05:19:15 crc kubenswrapper[4802]: I1004 05:19:15.105557 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" event={"ID":"5c3bfd35-a1a2-41e1-a47b-b4a762090644","Type":"ContainerStarted","Data":"17f8ad48e79c8f80db07955016a510a82d2f6d35c6bfff093a630c00342ef442"} Oct 04 05:19:15 crc kubenswrapper[4802]: I1004 05:19:15.121429 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" podStartSLOduration=2.316986065 podStartE2EDuration="3.121412517s" podCreationTimestamp="2025-10-04 05:19:12 +0000 UTC" firstStartedPulling="2025-10-04 05:19:12.972226119 +0000 UTC m=+1995.380226744" lastFinishedPulling="2025-10-04 05:19:13.776652571 +0000 UTC m=+1996.184653196" observedRunningTime="2025-10-04 
05:19:15.118792833 +0000 UTC m=+1997.526793458" watchObservedRunningTime="2025-10-04 05:19:15.121412517 +0000 UTC m=+1997.529413142" Oct 04 05:19:22 crc kubenswrapper[4802]: I1004 05:19:22.166517 4802 generic.go:334] "Generic (PLEG): container finished" podID="5c3bfd35-a1a2-41e1-a47b-b4a762090644" containerID="17f8ad48e79c8f80db07955016a510a82d2f6d35c6bfff093a630c00342ef442" exitCode=0 Oct 04 05:19:22 crc kubenswrapper[4802]: I1004 05:19:22.166592 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" event={"ID":"5c3bfd35-a1a2-41e1-a47b-b4a762090644","Type":"ContainerDied","Data":"17f8ad48e79c8f80db07955016a510a82d2f6d35c6bfff093a630c00342ef442"} Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.565224 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.652855 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-inventory\") pod \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.653033 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-ssh-key\") pod \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.653095 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjwp\" (UniqueName: \"kubernetes.io/projected/5c3bfd35-a1a2-41e1-a47b-b4a762090644-kube-api-access-kbjwp\") pod \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\" (UID: \"5c3bfd35-a1a2-41e1-a47b-b4a762090644\") " Oct 04 05:19:23 crc 
kubenswrapper[4802]: I1004 05:19:23.658053 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c3bfd35-a1a2-41e1-a47b-b4a762090644-kube-api-access-kbjwp" (OuterVolumeSpecName: "kube-api-access-kbjwp") pod "5c3bfd35-a1a2-41e1-a47b-b4a762090644" (UID: "5c3bfd35-a1a2-41e1-a47b-b4a762090644"). InnerVolumeSpecName "kube-api-access-kbjwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.679129 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5c3bfd35-a1a2-41e1-a47b-b4a762090644" (UID: "5c3bfd35-a1a2-41e1-a47b-b4a762090644"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.680009 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-inventory" (OuterVolumeSpecName: "inventory") pod "5c3bfd35-a1a2-41e1-a47b-b4a762090644" (UID: "5c3bfd35-a1a2-41e1-a47b-b4a762090644"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.755326 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjwp\" (UniqueName: \"kubernetes.io/projected/5c3bfd35-a1a2-41e1-a47b-b4a762090644-kube-api-access-kbjwp\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.755364 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:23 crc kubenswrapper[4802]: I1004 05:19:23.755377 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c3bfd35-a1a2-41e1-a47b-b4a762090644-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.184934 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" event={"ID":"5c3bfd35-a1a2-41e1-a47b-b4a762090644","Type":"ContainerDied","Data":"ddd67f681b08134049b1509e43c6a6e98aa765fb7e5de306d00fb145a112623b"} Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.185282 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd67f681b08134049b1509e43c6a6e98aa765fb7e5de306d00fb145a112623b" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.185040 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.277779 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr"] Oct 04 05:19:24 crc kubenswrapper[4802]: E1004 05:19:24.278175 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3bfd35-a1a2-41e1-a47b-b4a762090644" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.278196 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3bfd35-a1a2-41e1-a47b-b4a762090644" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.278424 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3bfd35-a1a2-41e1-a47b-b4a762090644" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.279097 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.281450 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.281978 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.282567 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.284235 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.291368 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr"] Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.365305 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.365362 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdg4\" (UniqueName: \"kubernetes.io/projected/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-kube-api-access-xtdg4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.365898 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.468753 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.468855 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.468895 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdg4\" (UniqueName: \"kubernetes.io/projected/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-kube-api-access-xtdg4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.474110 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: 
\"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.474159 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.501583 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdg4\" (UniqueName: \"kubernetes.io/projected/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-kube-api-access-xtdg4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:24 crc kubenswrapper[4802]: I1004 05:19:24.598280 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:25 crc kubenswrapper[4802]: I1004 05:19:25.118262 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr"] Oct 04 05:19:25 crc kubenswrapper[4802]: I1004 05:19:25.195155 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" event={"ID":"6c5a61e7-d6f1-4bde-9e45-3145734fd92a","Type":"ContainerStarted","Data":"592503229c81511c60541bc7e6ab7850a90514b73d2c7f2f64b3fecb2c42470e"} Oct 04 05:19:26 crc kubenswrapper[4802]: I1004 05:19:26.207691 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" event={"ID":"6c5a61e7-d6f1-4bde-9e45-3145734fd92a","Type":"ContainerStarted","Data":"a7f0f903bd7da897033688e42fb3e5718eb6b099c2221998280eca50c9d7a35c"} Oct 04 05:19:26 crc kubenswrapper[4802]: I1004 05:19:26.232616 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" podStartSLOduration=1.798851338 podStartE2EDuration="2.232596163s" podCreationTimestamp="2025-10-04 05:19:24 +0000 UTC" firstStartedPulling="2025-10-04 05:19:25.126335496 +0000 UTC m=+2007.534336121" lastFinishedPulling="2025-10-04 05:19:25.560080311 +0000 UTC m=+2007.968080946" observedRunningTime="2025-10-04 05:19:26.224850842 +0000 UTC m=+2008.632851467" watchObservedRunningTime="2025-10-04 05:19:26.232596163 +0000 UTC m=+2008.640596788" Oct 04 05:19:27 crc kubenswrapper[4802]: I1004 05:19:27.359992 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:19:27 crc kubenswrapper[4802]: E1004 05:19:27.361382 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:19:35 crc kubenswrapper[4802]: I1004 05:19:35.287724 4802 generic.go:334] "Generic (PLEG): container finished" podID="6c5a61e7-d6f1-4bde-9e45-3145734fd92a" containerID="a7f0f903bd7da897033688e42fb3e5718eb6b099c2221998280eca50c9d7a35c" exitCode=0 Oct 04 05:19:35 crc kubenswrapper[4802]: I1004 05:19:35.287833 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" event={"ID":"6c5a61e7-d6f1-4bde-9e45-3145734fd92a","Type":"ContainerDied","Data":"a7f0f903bd7da897033688e42fb3e5718eb6b099c2221998280eca50c9d7a35c"} Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.726721 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.796490 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtdg4\" (UniqueName: \"kubernetes.io/projected/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-kube-api-access-xtdg4\") pod \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.796936 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-inventory\") pod \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.797026 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-ssh-key\") pod \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\" (UID: \"6c5a61e7-d6f1-4bde-9e45-3145734fd92a\") " Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.806125 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-kube-api-access-xtdg4" (OuterVolumeSpecName: "kube-api-access-xtdg4") pod "6c5a61e7-d6f1-4bde-9e45-3145734fd92a" (UID: "6c5a61e7-d6f1-4bde-9e45-3145734fd92a"). InnerVolumeSpecName "kube-api-access-xtdg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.823882 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c5a61e7-d6f1-4bde-9e45-3145734fd92a" (UID: "6c5a61e7-d6f1-4bde-9e45-3145734fd92a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.825823 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-inventory" (OuterVolumeSpecName: "inventory") pod "6c5a61e7-d6f1-4bde-9e45-3145734fd92a" (UID: "6c5a61e7-d6f1-4bde-9e45-3145734fd92a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.898937 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.898982 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtdg4\" (UniqueName: \"kubernetes.io/projected/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-kube-api-access-xtdg4\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:36 crc kubenswrapper[4802]: I1004 05:19:36.899002 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c5a61e7-d6f1-4bde-9e45-3145734fd92a-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:19:37 crc kubenswrapper[4802]: I1004 05:19:37.308092 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" event={"ID":"6c5a61e7-d6f1-4bde-9e45-3145734fd92a","Type":"ContainerDied","Data":"592503229c81511c60541bc7e6ab7850a90514b73d2c7f2f64b3fecb2c42470e"} Oct 04 05:19:37 crc kubenswrapper[4802]: I1004 05:19:37.308146 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="592503229c81511c60541bc7e6ab7850a90514b73d2c7f2f64b3fecb2c42470e" Oct 04 05:19:37 crc kubenswrapper[4802]: I1004 05:19:37.308193 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr" Oct 04 05:19:39 crc kubenswrapper[4802]: I1004 05:19:39.041482 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5xdjz"] Oct 04 05:19:39 crc kubenswrapper[4802]: I1004 05:19:39.048254 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5xdjz"] Oct 04 05:19:40 crc kubenswrapper[4802]: I1004 05:19:40.372712 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcbc1c5a-044d-4c7a-a7b4-79e0da35c045" path="/var/lib/kubelet/pods/fcbc1c5a-044d-4c7a-a7b4-79e0da35c045/volumes" Oct 04 05:19:42 crc kubenswrapper[4802]: I1004 05:19:42.360355 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:19:42 crc kubenswrapper[4802]: E1004 05:19:42.361054 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:19:56 crc kubenswrapper[4802]: I1004 05:19:56.360217 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:19:57 crc kubenswrapper[4802]: I1004 05:19:57.469770 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"908fdeba8f2be8d21d6e188a6dddef471cfd923d4ae6e235cc59265fd970e5c7"} Oct 04 05:20:13 crc kubenswrapper[4802]: I1004 05:20:13.458391 4802 scope.go:117] "RemoveContainer" 
containerID="ba5d03f70e00c76663d3f1b77d49a4fb2a5bf5543bcf1adcdf8b42b923f86216" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.548668 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pv6b8"] Oct 04 05:20:18 crc kubenswrapper[4802]: E1004 05:20:18.549622 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5a61e7-d6f1-4bde-9e45-3145734fd92a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.549656 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5a61e7-d6f1-4bde-9e45-3145734fd92a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.549852 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5a61e7-d6f1-4bde-9e45-3145734fd92a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.551256 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.559963 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pv6b8"] Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.654866 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp88r\" (UniqueName: \"kubernetes.io/projected/8cc17f29-6d88-46a6-b721-f3d17999af27-kube-api-access-zp88r\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.654983 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-catalog-content\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.655061 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-utilities\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.756334 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-catalog-content\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.756658 4802 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-utilities\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.756878 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp88r\" (UniqueName: \"kubernetes.io/projected/8cc17f29-6d88-46a6-b721-f3d17999af27-kube-api-access-zp88r\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.757003 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-catalog-content\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.757228 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-utilities\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.780412 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp88r\" (UniqueName: \"kubernetes.io/projected/8cc17f29-6d88-46a6-b721-f3d17999af27-kube-api-access-zp88r\") pod \"redhat-operators-pv6b8\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:18 crc kubenswrapper[4802]: I1004 05:20:18.877181 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:19 crc kubenswrapper[4802]: W1004 05:20:19.344481 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cc17f29_6d88_46a6_b721_f3d17999af27.slice/crio-b924b819a38b94e26d738561469019ea8bd92cac18f5113673f9542cda0563b6 WatchSource:0}: Error finding container b924b819a38b94e26d738561469019ea8bd92cac18f5113673f9542cda0563b6: Status 404 returned error can't find the container with id b924b819a38b94e26d738561469019ea8bd92cac18f5113673f9542cda0563b6 Oct 04 05:20:19 crc kubenswrapper[4802]: I1004 05:20:19.353420 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pv6b8"] Oct 04 05:20:19 crc kubenswrapper[4802]: I1004 05:20:19.671925 4802 generic.go:334] "Generic (PLEG): container finished" podID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerID="850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c" exitCode=0 Oct 04 05:20:19 crc kubenswrapper[4802]: I1004 05:20:19.672073 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pv6b8" event={"ID":"8cc17f29-6d88-46a6-b721-f3d17999af27","Type":"ContainerDied","Data":"850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c"} Oct 04 05:20:19 crc kubenswrapper[4802]: I1004 05:20:19.672283 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pv6b8" event={"ID":"8cc17f29-6d88-46a6-b721-f3d17999af27","Type":"ContainerStarted","Data":"b924b819a38b94e26d738561469019ea8bd92cac18f5113673f9542cda0563b6"} Oct 04 05:20:19 crc kubenswrapper[4802]: I1004 05:20:19.674206 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 05:20:21 crc kubenswrapper[4802]: I1004 05:20:21.690620 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pv6b8" event={"ID":"8cc17f29-6d88-46a6-b721-f3d17999af27","Type":"ContainerStarted","Data":"a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3"} Oct 04 05:20:22 crc kubenswrapper[4802]: I1004 05:20:22.702999 4802 generic.go:334] "Generic (PLEG): container finished" podID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerID="a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3" exitCode=0 Oct 04 05:20:22 crc kubenswrapper[4802]: I1004 05:20:22.703112 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pv6b8" event={"ID":"8cc17f29-6d88-46a6-b721-f3d17999af27","Type":"ContainerDied","Data":"a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3"} Oct 04 05:20:28 crc kubenswrapper[4802]: I1004 05:20:28.760395 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pv6b8" event={"ID":"8cc17f29-6d88-46a6-b721-f3d17999af27","Type":"ContainerStarted","Data":"95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38"} Oct 04 05:20:28 crc kubenswrapper[4802]: I1004 05:20:28.877870 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:28 crc kubenswrapper[4802]: I1004 05:20:28.877923 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:29 crc kubenswrapper[4802]: I1004 05:20:29.920256 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pv6b8" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="registry-server" probeResult="failure" output=< Oct 04 05:20:29 crc kubenswrapper[4802]: timeout: failed to connect service ":50051" within 1s Oct 04 05:20:29 crc kubenswrapper[4802]: > Oct 04 05:20:38 crc kubenswrapper[4802]: I1004 05:20:38.929711 4802 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:38 crc kubenswrapper[4802]: I1004 05:20:38.991762 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:38 crc kubenswrapper[4802]: I1004 05:20:38.997242 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pv6b8" podStartSLOduration=12.914830885 podStartE2EDuration="20.996935613s" podCreationTimestamp="2025-10-04 05:20:18 +0000 UTC" firstStartedPulling="2025-10-04 05:20:19.673667747 +0000 UTC m=+2062.081668372" lastFinishedPulling="2025-10-04 05:20:27.755772475 +0000 UTC m=+2070.163773100" observedRunningTime="2025-10-04 05:20:28.785079089 +0000 UTC m=+2071.193079714" watchObservedRunningTime="2025-10-04 05:20:38.996935613 +0000 UTC m=+2081.404936238" Oct 04 05:20:39 crc kubenswrapper[4802]: I1004 05:20:39.163330 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pv6b8"] Oct 04 05:20:40 crc kubenswrapper[4802]: I1004 05:20:40.855632 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pv6b8" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="registry-server" containerID="cri-o://95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38" gracePeriod=2 Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.502037 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.663302 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-catalog-content\") pod \"8cc17f29-6d88-46a6-b721-f3d17999af27\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.663428 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-utilities\") pod \"8cc17f29-6d88-46a6-b721-f3d17999af27\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.663577 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp88r\" (UniqueName: \"kubernetes.io/projected/8cc17f29-6d88-46a6-b721-f3d17999af27-kube-api-access-zp88r\") pod \"8cc17f29-6d88-46a6-b721-f3d17999af27\" (UID: \"8cc17f29-6d88-46a6-b721-f3d17999af27\") " Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.664688 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-utilities" (OuterVolumeSpecName: "utilities") pod "8cc17f29-6d88-46a6-b721-f3d17999af27" (UID: "8cc17f29-6d88-46a6-b721-f3d17999af27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.672985 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc17f29-6d88-46a6-b721-f3d17999af27-kube-api-access-zp88r" (OuterVolumeSpecName: "kube-api-access-zp88r") pod "8cc17f29-6d88-46a6-b721-f3d17999af27" (UID: "8cc17f29-6d88-46a6-b721-f3d17999af27"). InnerVolumeSpecName "kube-api-access-zp88r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.752558 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cc17f29-6d88-46a6-b721-f3d17999af27" (UID: "8cc17f29-6d88-46a6-b721-f3d17999af27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.765959 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.765996 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp88r\" (UniqueName: \"kubernetes.io/projected/8cc17f29-6d88-46a6-b721-f3d17999af27-kube-api-access-zp88r\") on node \"crc\" DevicePath \"\"" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.766009 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc17f29-6d88-46a6-b721-f3d17999af27-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.866791 4802 generic.go:334] "Generic (PLEG): container finished" podID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerID="95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38" exitCode=0 Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.866835 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pv6b8" event={"ID":"8cc17f29-6d88-46a6-b721-f3d17999af27","Type":"ContainerDied","Data":"95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38"} Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.866879 4802 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-pv6b8" event={"ID":"8cc17f29-6d88-46a6-b721-f3d17999af27","Type":"ContainerDied","Data":"b924b819a38b94e26d738561469019ea8bd92cac18f5113673f9542cda0563b6"} Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.866901 4802 scope.go:117] "RemoveContainer" containerID="95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.867484 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pv6b8" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.900810 4802 scope.go:117] "RemoveContainer" containerID="a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.901772 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pv6b8"] Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.908493 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pv6b8"] Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.922144 4802 scope.go:117] "RemoveContainer" containerID="850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.966886 4802 scope.go:117] "RemoveContainer" containerID="95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38" Oct 04 05:20:41 crc kubenswrapper[4802]: E1004 05:20:41.967483 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38\": container with ID starting with 95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38 not found: ID does not exist" containerID="95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.967522 4802 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38"} err="failed to get container status \"95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38\": rpc error: code = NotFound desc = could not find container \"95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38\": container with ID starting with 95c1f8687a8f8f401b22f2112d3edb175708fcdbb2cb540666d72a9befa69c38 not found: ID does not exist" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.967546 4802 scope.go:117] "RemoveContainer" containerID="a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3" Oct 04 05:20:41 crc kubenswrapper[4802]: E1004 05:20:41.968076 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3\": container with ID starting with a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3 not found: ID does not exist" containerID="a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.968149 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3"} err="failed to get container status \"a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3\": rpc error: code = NotFound desc = could not find container \"a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3\": container with ID starting with a735c2e6aa27a01e6e150c174f99c9ef54cfd626deb0404a8dc1470a189050d3 not found: ID does not exist" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.968195 4802 scope.go:117] "RemoveContainer" containerID="850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c" Oct 04 05:20:41 crc kubenswrapper[4802]: E1004 
05:20:41.968704 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c\": container with ID starting with 850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c not found: ID does not exist" containerID="850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c" Oct 04 05:20:41 crc kubenswrapper[4802]: I1004 05:20:41.968735 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c"} err="failed to get container status \"850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c\": rpc error: code = NotFound desc = could not find container \"850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c\": container with ID starting with 850a9948abef6fbb49f0e9207cca3d1604f9872b343ee200643206ee289ed22c not found: ID does not exist" Oct 04 05:20:42 crc kubenswrapper[4802]: I1004 05:20:42.370973 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" path="/var/lib/kubelet/pods/8cc17f29-6d88-46a6-b721-f3d17999af27/volumes" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.684516 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d6f8h"] Oct 04 05:22:21 crc kubenswrapper[4802]: E1004 05:22:21.685497 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="extract-content" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.685513 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="extract-content" Oct 04 05:22:21 crc kubenswrapper[4802]: E1004 05:22:21.685551 4802 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="registry-server" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.685558 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="registry-server" Oct 04 05:22:21 crc kubenswrapper[4802]: E1004 05:22:21.685587 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="extract-utilities" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.685596 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="extract-utilities" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.685836 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc17f29-6d88-46a6-b721-f3d17999af27" containerName="registry-server" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.687386 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.714263 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6f8h"] Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.827715 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-utilities\") pod \"community-operators-d6f8h\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.827833 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-catalog-content\") pod \"community-operators-d6f8h\" (UID: 
\"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.827947 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxrd\" (UniqueName: \"kubernetes.io/projected/fafcad8d-0e85-45dd-b813-7036411e0bc8-kube-api-access-zcxrd\") pod \"community-operators-d6f8h\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.929810 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-utilities\") pod \"community-operators-d6f8h\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.929910 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-catalog-content\") pod \"community-operators-d6f8h\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.929944 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcxrd\" (UniqueName: \"kubernetes.io/projected/fafcad8d-0e85-45dd-b813-7036411e0bc8-kube-api-access-zcxrd\") pod \"community-operators-d6f8h\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.930297 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-utilities\") pod \"community-operators-d6f8h\" (UID: 
\"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.930440 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-catalog-content\") pod \"community-operators-d6f8h\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:21 crc kubenswrapper[4802]: I1004 05:22:21.959006 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcxrd\" (UniqueName: \"kubernetes.io/projected/fafcad8d-0e85-45dd-b813-7036411e0bc8-kube-api-access-zcxrd\") pod \"community-operators-d6f8h\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:22 crc kubenswrapper[4802]: I1004 05:22:22.019376 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:22 crc kubenswrapper[4802]: I1004 05:22:22.554241 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6f8h"] Oct 04 05:22:22 crc kubenswrapper[4802]: I1004 05:22:22.662179 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:22:22 crc kubenswrapper[4802]: I1004 05:22:22.662250 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:22:22 crc kubenswrapper[4802]: I1004 05:22:22.732372 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6f8h" event={"ID":"fafcad8d-0e85-45dd-b813-7036411e0bc8","Type":"ContainerStarted","Data":"bd375c2aade4cfdacebe42ef17ba8ed6aec910e3e558b97b61d93fd428a9c494"} Oct 04 05:22:23 crc kubenswrapper[4802]: I1004 05:22:23.749542 4802 generic.go:334] "Generic (PLEG): container finished" podID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerID="b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315" exitCode=0 Oct 04 05:22:23 crc kubenswrapper[4802]: I1004 05:22:23.749742 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6f8h" event={"ID":"fafcad8d-0e85-45dd-b813-7036411e0bc8","Type":"ContainerDied","Data":"b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315"} Oct 04 05:22:27 crc kubenswrapper[4802]: I1004 05:22:27.785660 4802 generic.go:334] "Generic 
(PLEG): container finished" podID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerID="7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2" exitCode=0 Oct 04 05:22:27 crc kubenswrapper[4802]: I1004 05:22:27.785754 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6f8h" event={"ID":"fafcad8d-0e85-45dd-b813-7036411e0bc8","Type":"ContainerDied","Data":"7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2"} Oct 04 05:22:29 crc kubenswrapper[4802]: I1004 05:22:29.802535 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6f8h" event={"ID":"fafcad8d-0e85-45dd-b813-7036411e0bc8","Type":"ContainerStarted","Data":"33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3"} Oct 04 05:22:29 crc kubenswrapper[4802]: I1004 05:22:29.823426 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d6f8h" podStartSLOduration=3.972007865 podStartE2EDuration="8.823410662s" podCreationTimestamp="2025-10-04 05:22:21 +0000 UTC" firstStartedPulling="2025-10-04 05:22:23.752373285 +0000 UTC m=+2186.160373910" lastFinishedPulling="2025-10-04 05:22:28.603776082 +0000 UTC m=+2191.011776707" observedRunningTime="2025-10-04 05:22:29.819370865 +0000 UTC m=+2192.227371500" watchObservedRunningTime="2025-10-04 05:22:29.823410662 +0000 UTC m=+2192.231411287" Oct 04 05:22:32 crc kubenswrapper[4802]: I1004 05:22:32.020683 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:32 crc kubenswrapper[4802]: I1004 05:22:32.021028 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:32 crc kubenswrapper[4802]: I1004 05:22:32.066037 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.714927 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bpbcs"] Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.717299 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.731968 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpbcs"] Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.882100 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczvp\" (UniqueName: \"kubernetes.io/projected/87c5b1eb-b208-43dc-9829-074aa9aa866a-kube-api-access-vczvp\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.882389 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-utilities\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.882561 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-catalog-content\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.984066 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-utilities\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.984168 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-catalog-content\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.984214 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vczvp\" (UniqueName: \"kubernetes.io/projected/87c5b1eb-b208-43dc-9829-074aa9aa866a-kube-api-access-vczvp\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.984932 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-utilities\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:41 crc kubenswrapper[4802]: I1004 05:22:41.985183 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-catalog-content\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:42 crc kubenswrapper[4802]: I1004 05:22:42.003197 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vczvp\" (UniqueName: 
\"kubernetes.io/projected/87c5b1eb-b208-43dc-9829-074aa9aa866a-kube-api-access-vczvp\") pod \"redhat-marketplace-bpbcs\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:42 crc kubenswrapper[4802]: I1004 05:22:42.046363 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:42 crc kubenswrapper[4802]: I1004 05:22:42.073660 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:42 crc kubenswrapper[4802]: I1004 05:22:42.503499 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpbcs"] Oct 04 05:22:42 crc kubenswrapper[4802]: W1004 05:22:42.507263 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87c5b1eb_b208_43dc_9829_074aa9aa866a.slice/crio-acac8416ad13ac61fcab9d18cf0c62058113d6761cd790f6f00b1b1acd6b5fe8 WatchSource:0}: Error finding container acac8416ad13ac61fcab9d18cf0c62058113d6761cd790f6f00b1b1acd6b5fe8: Status 404 returned error can't find the container with id acac8416ad13ac61fcab9d18cf0c62058113d6761cd790f6f00b1b1acd6b5fe8 Oct 04 05:22:42 crc kubenswrapper[4802]: I1004 05:22:42.899840 4802 generic.go:334] "Generic (PLEG): container finished" podID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerID="d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0" exitCode=0 Oct 04 05:22:42 crc kubenswrapper[4802]: I1004 05:22:42.899890 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpbcs" event={"ID":"87c5b1eb-b208-43dc-9829-074aa9aa866a","Type":"ContainerDied","Data":"d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0"} Oct 04 05:22:42 crc kubenswrapper[4802]: I1004 05:22:42.900164 4802 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-bpbcs" event={"ID":"87c5b1eb-b208-43dc-9829-074aa9aa866a","Type":"ContainerStarted","Data":"acac8416ad13ac61fcab9d18cf0c62058113d6761cd790f6f00b1b1acd6b5fe8"} Oct 04 05:22:43 crc kubenswrapper[4802]: I1004 05:22:43.910306 4802 generic.go:334] "Generic (PLEG): container finished" podID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerID="27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54" exitCode=0 Oct 04 05:22:43 crc kubenswrapper[4802]: I1004 05:22:43.910380 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpbcs" event={"ID":"87c5b1eb-b208-43dc-9829-074aa9aa866a","Type":"ContainerDied","Data":"27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54"} Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.497863 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6f8h"] Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.498131 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d6f8h" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerName="registry-server" containerID="cri-o://33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3" gracePeriod=2 Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.919484 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.922958 4802 generic.go:334] "Generic (PLEG): container finished" podID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerID="33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3" exitCode=0 Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.923043 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6f8h" event={"ID":"fafcad8d-0e85-45dd-b813-7036411e0bc8","Type":"ContainerDied","Data":"33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3"} Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.923101 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6f8h" event={"ID":"fafcad8d-0e85-45dd-b813-7036411e0bc8","Type":"ContainerDied","Data":"bd375c2aade4cfdacebe42ef17ba8ed6aec910e3e558b97b61d93fd428a9c494"} Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.923124 4802 scope.go:117] "RemoveContainer" containerID="33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3" Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.925687 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpbcs" event={"ID":"87c5b1eb-b208-43dc-9829-074aa9aa866a","Type":"ContainerStarted","Data":"744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e"} Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.951248 4802 scope.go:117] "RemoveContainer" containerID="7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2" Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.964751 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bpbcs" podStartSLOduration=2.5382796560000003 podStartE2EDuration="3.964730317s" podCreationTimestamp="2025-10-04 05:22:41 +0000 UTC" 
firstStartedPulling="2025-10-04 05:22:42.902032073 +0000 UTC m=+2205.310032698" lastFinishedPulling="2025-10-04 05:22:44.328482734 +0000 UTC m=+2206.736483359" observedRunningTime="2025-10-04 05:22:44.960557336 +0000 UTC m=+2207.368557961" watchObservedRunningTime="2025-10-04 05:22:44.964730317 +0000 UTC m=+2207.372730942" Oct 04 05:22:44 crc kubenswrapper[4802]: I1004 05:22:44.984040 4802 scope.go:117] "RemoveContainer" containerID="b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.025598 4802 scope.go:117] "RemoveContainer" containerID="33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3" Oct 04 05:22:45 crc kubenswrapper[4802]: E1004 05:22:45.026169 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3\": container with ID starting with 33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3 not found: ID does not exist" containerID="33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.026220 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3"} err="failed to get container status \"33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3\": rpc error: code = NotFound desc = could not find container \"33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3\": container with ID starting with 33505507c3ec3a1d326521d74f87ffdc31f1f9efe7692fcfa632a6cff91f2ca3 not found: ID does not exist" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.026246 4802 scope.go:117] "RemoveContainer" containerID="7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2" Oct 04 05:22:45 crc kubenswrapper[4802]: E1004 05:22:45.027104 4802 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2\": container with ID starting with 7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2 not found: ID does not exist" containerID="7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.027133 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2"} err="failed to get container status \"7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2\": rpc error: code = NotFound desc = could not find container \"7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2\": container with ID starting with 7f44e6ca7b7e0ef1511016b5f7669299ad548f9d2bb1c17835e3c41cfb5064b2 not found: ID does not exist" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.027153 4802 scope.go:117] "RemoveContainer" containerID="b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315" Oct 04 05:22:45 crc kubenswrapper[4802]: E1004 05:22:45.027809 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315\": container with ID starting with b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315 not found: ID does not exist" containerID="b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.027837 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315"} err="failed to get container status \"b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315\": rpc error: code = NotFound desc = could 
not find container \"b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315\": container with ID starting with b515f742892846af73790dfff228079f078d1b938e681bf24d83dfab715b1315 not found: ID does not exist" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.042342 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcxrd\" (UniqueName: \"kubernetes.io/projected/fafcad8d-0e85-45dd-b813-7036411e0bc8-kube-api-access-zcxrd\") pod \"fafcad8d-0e85-45dd-b813-7036411e0bc8\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.042425 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-catalog-content\") pod \"fafcad8d-0e85-45dd-b813-7036411e0bc8\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.042460 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-utilities\") pod \"fafcad8d-0e85-45dd-b813-7036411e0bc8\" (UID: \"fafcad8d-0e85-45dd-b813-7036411e0bc8\") " Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.043413 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-utilities" (OuterVolumeSpecName: "utilities") pod "fafcad8d-0e85-45dd-b813-7036411e0bc8" (UID: "fafcad8d-0e85-45dd-b813-7036411e0bc8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.048667 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fafcad8d-0e85-45dd-b813-7036411e0bc8-kube-api-access-zcxrd" (OuterVolumeSpecName: "kube-api-access-zcxrd") pod "fafcad8d-0e85-45dd-b813-7036411e0bc8" (UID: "fafcad8d-0e85-45dd-b813-7036411e0bc8"). InnerVolumeSpecName "kube-api-access-zcxrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.096098 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fafcad8d-0e85-45dd-b813-7036411e0bc8" (UID: "fafcad8d-0e85-45dd-b813-7036411e0bc8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.144686 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcxrd\" (UniqueName: \"kubernetes.io/projected/fafcad8d-0e85-45dd-b813-7036411e0bc8-kube-api-access-zcxrd\") on node \"crc\" DevicePath \"\"" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.144730 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.144742 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fafcad8d-0e85-45dd-b813-7036411e0bc8-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.943787 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6f8h" Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.981264 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6f8h"] Oct 04 05:22:45 crc kubenswrapper[4802]: I1004 05:22:45.990110 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d6f8h"] Oct 04 05:22:46 crc kubenswrapper[4802]: I1004 05:22:46.373287 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" path="/var/lib/kubelet/pods/fafcad8d-0e85-45dd-b813-7036411e0bc8/volumes" Oct 04 05:22:52 crc kubenswrapper[4802]: I1004 05:22:52.047543 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:52 crc kubenswrapper[4802]: I1004 05:22:52.048181 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:52 crc kubenswrapper[4802]: I1004 05:22:52.093500 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:52 crc kubenswrapper[4802]: I1004 05:22:52.662679 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:22:52 crc kubenswrapper[4802]: I1004 05:22:52.662750 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 
05:22:53 crc kubenswrapper[4802]: I1004 05:22:53.041259 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:53 crc kubenswrapper[4802]: I1004 05:22:53.093105 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpbcs"] Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.020184 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bpbcs" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerName="registry-server" containerID="cri-o://744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e" gracePeriod=2 Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.549089 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.644150 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-catalog-content\") pod \"87c5b1eb-b208-43dc-9829-074aa9aa866a\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.644308 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vczvp\" (UniqueName: \"kubernetes.io/projected/87c5b1eb-b208-43dc-9829-074aa9aa866a-kube-api-access-vczvp\") pod \"87c5b1eb-b208-43dc-9829-074aa9aa866a\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.644467 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-utilities\") pod \"87c5b1eb-b208-43dc-9829-074aa9aa866a\" (UID: \"87c5b1eb-b208-43dc-9829-074aa9aa866a\") " Oct 04 
05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.645270 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-utilities" (OuterVolumeSpecName: "utilities") pod "87c5b1eb-b208-43dc-9829-074aa9aa866a" (UID: "87c5b1eb-b208-43dc-9829-074aa9aa866a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.650122 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c5b1eb-b208-43dc-9829-074aa9aa866a-kube-api-access-vczvp" (OuterVolumeSpecName: "kube-api-access-vczvp") pod "87c5b1eb-b208-43dc-9829-074aa9aa866a" (UID: "87c5b1eb-b208-43dc-9829-074aa9aa866a"). InnerVolumeSpecName "kube-api-access-vczvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.658437 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87c5b1eb-b208-43dc-9829-074aa9aa866a" (UID: "87c5b1eb-b208-43dc-9829-074aa9aa866a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.746867 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.746906 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vczvp\" (UniqueName: \"kubernetes.io/projected/87c5b1eb-b208-43dc-9829-074aa9aa866a-kube-api-access-vczvp\") on node \"crc\" DevicePath \"\"" Oct 04 05:22:55 crc kubenswrapper[4802]: I1004 05:22:55.746920 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c5b1eb-b208-43dc-9829-074aa9aa866a-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.034899 4802 generic.go:334] "Generic (PLEG): container finished" podID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerID="744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e" exitCode=0 Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.034951 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpbcs" event={"ID":"87c5b1eb-b208-43dc-9829-074aa9aa866a","Type":"ContainerDied","Data":"744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e"} Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.034986 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpbcs" event={"ID":"87c5b1eb-b208-43dc-9829-074aa9aa866a","Type":"ContainerDied","Data":"acac8416ad13ac61fcab9d18cf0c62058113d6761cd790f6f00b1b1acd6b5fe8"} Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.034978 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpbcs" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.035075 4802 scope.go:117] "RemoveContainer" containerID="744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.054876 4802 scope.go:117] "RemoveContainer" containerID="27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.068868 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpbcs"] Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.082396 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpbcs"] Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.092152 4802 scope.go:117] "RemoveContainer" containerID="d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.120382 4802 scope.go:117] "RemoveContainer" containerID="744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e" Oct 04 05:22:56 crc kubenswrapper[4802]: E1004 05:22:56.120920 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e\": container with ID starting with 744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e not found: ID does not exist" containerID="744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.120998 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e"} err="failed to get container status \"744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e\": rpc error: code = NotFound desc = could not find container 
\"744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e\": container with ID starting with 744187c7bf85bda457bfbf80d43f40fa92696da128ad02fdd369cbd51e8cde7e not found: ID does not exist" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.121025 4802 scope.go:117] "RemoveContainer" containerID="27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54" Oct 04 05:22:56 crc kubenswrapper[4802]: E1004 05:22:56.121468 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54\": container with ID starting with 27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54 not found: ID does not exist" containerID="27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.121510 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54"} err="failed to get container status \"27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54\": rpc error: code = NotFound desc = could not find container \"27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54\": container with ID starting with 27b23308b49dbb71b7d727bce19c01508daf8dc27b9c280be08e7fbf1e5acf54 not found: ID does not exist" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.121559 4802 scope.go:117] "RemoveContainer" containerID="d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0" Oct 04 05:22:56 crc kubenswrapper[4802]: E1004 05:22:56.121952 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0\": container with ID starting with d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0 not found: ID does not exist" 
containerID="d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.121988 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0"} err="failed to get container status \"d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0\": rpc error: code = NotFound desc = could not find container \"d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0\": container with ID starting with d5768454a21b40917e8c9b7908a5ad82899b01a6c10d6d809ff4a76f8554b9e0 not found: ID does not exist" Oct 04 05:22:56 crc kubenswrapper[4802]: I1004 05:22:56.371537 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" path="/var/lib/kubelet/pods/87c5b1eb-b208-43dc-9829-074aa9aa866a/volumes" Oct 04 05:23:22 crc kubenswrapper[4802]: I1004 05:23:22.663154 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:23:22 crc kubenswrapper[4802]: I1004 05:23:22.663803 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:23:22 crc kubenswrapper[4802]: I1004 05:23:22.663858 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:23:22 crc kubenswrapper[4802]: I1004 05:23:22.664688 4802 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"908fdeba8f2be8d21d6e188a6dddef471cfd923d4ae6e235cc59265fd970e5c7"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:23:22 crc kubenswrapper[4802]: I1004 05:23:22.664751 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://908fdeba8f2be8d21d6e188a6dddef471cfd923d4ae6e235cc59265fd970e5c7" gracePeriod=600 Oct 04 05:23:23 crc kubenswrapper[4802]: I1004 05:23:23.297132 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="908fdeba8f2be8d21d6e188a6dddef471cfd923d4ae6e235cc59265fd970e5c7" exitCode=0 Oct 04 05:23:23 crc kubenswrapper[4802]: I1004 05:23:23.297326 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"908fdeba8f2be8d21d6e188a6dddef471cfd923d4ae6e235cc59265fd970e5c7"} Oct 04 05:23:23 crc kubenswrapper[4802]: I1004 05:23:23.297437 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b"} Oct 04 05:23:23 crc kubenswrapper[4802]: I1004 05:23:23.297457 4802 scope.go:117] "RemoveContainer" containerID="0bb06102c50b575b7b98e02e2cb2e083199b283b4de16b1fe959c2dec130bfd5" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.263249 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v4zfb"] Oct 04 05:23:38 
crc kubenswrapper[4802]: E1004 05:23:38.264189 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerName="extract-content" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.264204 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerName="extract-content" Oct 04 05:23:38 crc kubenswrapper[4802]: E1004 05:23:38.264235 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerName="registry-server" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.264244 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerName="registry-server" Oct 04 05:23:38 crc kubenswrapper[4802]: E1004 05:23:38.264267 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerName="extract-content" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.264276 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerName="extract-content" Oct 04 05:23:38 crc kubenswrapper[4802]: E1004 05:23:38.264287 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerName="extract-utilities" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.264294 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerName="extract-utilities" Oct 04 05:23:38 crc kubenswrapper[4802]: E1004 05:23:38.264307 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerName="extract-utilities" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.264314 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerName="extract-utilities" Oct 04 05:23:38 crc 
kubenswrapper[4802]: E1004 05:23:38.264331 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerName="registry-server" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.264338 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerName="registry-server" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.264542 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcad8d-0e85-45dd-b813-7036411e0bc8" containerName="registry-server" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.264571 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c5b1eb-b208-43dc-9829-074aa9aa866a" containerName="registry-server" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.266105 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.275706 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v4zfb"] Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.408291 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-utilities\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.408371 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbm6g\" (UniqueName: \"kubernetes.io/projected/5e6a1312-00b2-4780-8b59-196a21c561d3-kube-api-access-jbm6g\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 
crc kubenswrapper[4802]: I1004 05:23:38.408445 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-catalog-content\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.509790 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-utilities\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.509855 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbm6g\" (UniqueName: \"kubernetes.io/projected/5e6a1312-00b2-4780-8b59-196a21c561d3-kube-api-access-jbm6g\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.509912 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-catalog-content\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.510317 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-catalog-content\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc 
kubenswrapper[4802]: I1004 05:23:38.511143 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-utilities\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.544975 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbm6g\" (UniqueName: \"kubernetes.io/projected/5e6a1312-00b2-4780-8b59-196a21c561d3-kube-api-access-jbm6g\") pod \"certified-operators-v4zfb\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:38 crc kubenswrapper[4802]: I1004 05:23:38.587525 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:39 crc kubenswrapper[4802]: I1004 05:23:39.108700 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v4zfb"] Oct 04 05:23:39 crc kubenswrapper[4802]: I1004 05:23:39.483521 4802 generic.go:334] "Generic (PLEG): container finished" podID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerID="a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160" exitCode=0 Oct 04 05:23:39 crc kubenswrapper[4802]: I1004 05:23:39.483565 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4zfb" event={"ID":"5e6a1312-00b2-4780-8b59-196a21c561d3","Type":"ContainerDied","Data":"a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160"} Oct 04 05:23:39 crc kubenswrapper[4802]: I1004 05:23:39.483589 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4zfb" 
event={"ID":"5e6a1312-00b2-4780-8b59-196a21c561d3","Type":"ContainerStarted","Data":"a38040d8fed71068fc067dd6ad148500867d200490a5d8465df4bf33ed23563b"} Oct 04 05:23:41 crc kubenswrapper[4802]: I1004 05:23:41.511970 4802 generic.go:334] "Generic (PLEG): container finished" podID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerID="cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa" exitCode=0 Oct 04 05:23:41 crc kubenswrapper[4802]: I1004 05:23:41.512039 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4zfb" event={"ID":"5e6a1312-00b2-4780-8b59-196a21c561d3","Type":"ContainerDied","Data":"cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa"} Oct 04 05:23:42 crc kubenswrapper[4802]: I1004 05:23:42.522868 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4zfb" event={"ID":"5e6a1312-00b2-4780-8b59-196a21c561d3","Type":"ContainerStarted","Data":"aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6"} Oct 04 05:23:42 crc kubenswrapper[4802]: I1004 05:23:42.545966 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v4zfb" podStartSLOduration=1.846578247 podStartE2EDuration="4.545945866s" podCreationTimestamp="2025-10-04 05:23:38 +0000 UTC" firstStartedPulling="2025-10-04 05:23:39.485674104 +0000 UTC m=+2261.893674729" lastFinishedPulling="2025-10-04 05:23:42.185041723 +0000 UTC m=+2264.593042348" observedRunningTime="2025-10-04 05:23:42.54469865 +0000 UTC m=+2264.952699295" watchObservedRunningTime="2025-10-04 05:23:42.545945866 +0000 UTC m=+2264.953946491" Oct 04 05:23:48 crc kubenswrapper[4802]: I1004 05:23:48.588870 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:48 crc kubenswrapper[4802]: I1004 05:23:48.589444 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:48 crc kubenswrapper[4802]: I1004 05:23:48.657420 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:49 crc kubenswrapper[4802]: I1004 05:23:49.632774 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:49 crc kubenswrapper[4802]: I1004 05:23:49.673698 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v4zfb"] Oct 04 05:23:51 crc kubenswrapper[4802]: I1004 05:23:51.601058 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v4zfb" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerName="registry-server" containerID="cri-o://aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6" gracePeriod=2 Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.017386 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.165908 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-catalog-content\") pod \"5e6a1312-00b2-4780-8b59-196a21c561d3\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.166346 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-utilities\") pod \"5e6a1312-00b2-4780-8b59-196a21c561d3\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.166510 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbm6g\" (UniqueName: \"kubernetes.io/projected/5e6a1312-00b2-4780-8b59-196a21c561d3-kube-api-access-jbm6g\") pod \"5e6a1312-00b2-4780-8b59-196a21c561d3\" (UID: \"5e6a1312-00b2-4780-8b59-196a21c561d3\") " Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.167102 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-utilities" (OuterVolumeSpecName: "utilities") pod "5e6a1312-00b2-4780-8b59-196a21c561d3" (UID: "5e6a1312-00b2-4780-8b59-196a21c561d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.173011 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6a1312-00b2-4780-8b59-196a21c561d3-kube-api-access-jbm6g" (OuterVolumeSpecName: "kube-api-access-jbm6g") pod "5e6a1312-00b2-4780-8b59-196a21c561d3" (UID: "5e6a1312-00b2-4780-8b59-196a21c561d3"). InnerVolumeSpecName "kube-api-access-jbm6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.268022 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.268059 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbm6g\" (UniqueName: \"kubernetes.io/projected/5e6a1312-00b2-4780-8b59-196a21c561d3-kube-api-access-jbm6g\") on node \"crc\" DevicePath \"\"" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.611582 4802 generic.go:334] "Generic (PLEG): container finished" podID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerID="aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6" exitCode=0 Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.611634 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4zfb" event={"ID":"5e6a1312-00b2-4780-8b59-196a21c561d3","Type":"ContainerDied","Data":"aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6"} Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.611708 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4zfb" event={"ID":"5e6a1312-00b2-4780-8b59-196a21c561d3","Type":"ContainerDied","Data":"a38040d8fed71068fc067dd6ad148500867d200490a5d8465df4bf33ed23563b"} Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.611731 4802 scope.go:117] "RemoveContainer" containerID="aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.612623 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v4zfb" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.632197 4802 scope.go:117] "RemoveContainer" containerID="cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.654717 4802 scope.go:117] "RemoveContainer" containerID="a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.699501 4802 scope.go:117] "RemoveContainer" containerID="aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6" Oct 04 05:23:52 crc kubenswrapper[4802]: E1004 05:23:52.700473 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6\": container with ID starting with aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6 not found: ID does not exist" containerID="aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.700539 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6"} err="failed to get container status \"aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6\": rpc error: code = NotFound desc = could not find container \"aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6\": container with ID starting with aefcd5f22396bf5b28fa9c33ea425090dd78e0ae87159606482de02083a629b6 not found: ID does not exist" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.700625 4802 scope.go:117] "RemoveContainer" containerID="cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa" Oct 04 05:23:52 crc kubenswrapper[4802]: E1004 05:23:52.701050 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa\": container with ID starting with cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa not found: ID does not exist" containerID="cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.701105 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa"} err="failed to get container status \"cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa\": rpc error: code = NotFound desc = could not find container \"cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa\": container with ID starting with cfe09526ff49a98cc4dceb516730fc660a91719bbf4ff9d6b91a693dca42a3aa not found: ID does not exist" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.701139 4802 scope.go:117] "RemoveContainer" containerID="a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160" Oct 04 05:23:52 crc kubenswrapper[4802]: E1004 05:23:52.701733 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160\": container with ID starting with a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160 not found: ID does not exist" containerID="a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160" Oct 04 05:23:52 crc kubenswrapper[4802]: I1004 05:23:52.701759 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160"} err="failed to get container status \"a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160\": rpc error: code = NotFound desc = could not find container 
\"a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160\": container with ID starting with a641ac65906f3aae94757951518bcc7cba7b12f30f38591bc4622901027e7160 not found: ID does not exist" Oct 04 05:23:53 crc kubenswrapper[4802]: I1004 05:23:53.071094 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e6a1312-00b2-4780-8b59-196a21c561d3" (UID: "5e6a1312-00b2-4780-8b59-196a21c561d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:23:53 crc kubenswrapper[4802]: I1004 05:23:53.082259 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1312-00b2-4780-8b59-196a21c561d3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:23:53 crc kubenswrapper[4802]: I1004 05:23:53.253594 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v4zfb"] Oct 04 05:23:53 crc kubenswrapper[4802]: I1004 05:23:53.263303 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v4zfb"] Oct 04 05:23:54 crc kubenswrapper[4802]: I1004 05:23:54.371042 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" path="/var/lib/kubelet/pods/5e6a1312-00b2-4780-8b59-196a21c561d3/volumes" Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.200739 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.208824 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dfqtp"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.217898 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.226328 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6p6vr"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.232688 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.239338 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.245881 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.253102 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.262261 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.270319 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.277888 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.284885 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.291506 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vz9fg"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 
05:24:59.298022 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-j4xt5"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.304764 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2cgh"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.311828 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-58cwv"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.318399 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5tfln"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.324140 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kvrxn"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.331693 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ppzjl"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.338922 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hm9pq"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.345764 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2b7dc"] Oct 04 05:24:59 crc kubenswrapper[4802]: I1004 05:24:59.352985 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vz9fg"] Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.369061 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed52aaa-5f2d-4199-ba3c-41251be41cbd" path="/var/lib/kubelet/pods/3ed52aaa-5f2d-4199-ba3c-41251be41cbd/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.369583 4802 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="52b35f0d-4354-452d-96d8-4079505bc44b" path="/var/lib/kubelet/pods/52b35f0d-4354-452d-96d8-4079505bc44b/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.370100 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5635be3d-08e4-4fd2-b3e4-488dda21dce7" path="/var/lib/kubelet/pods/5635be3d-08e4-4fd2-b3e4-488dda21dce7/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.370670 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c3bfd35-a1a2-41e1-a47b-b4a762090644" path="/var/lib/kubelet/pods/5c3bfd35-a1a2-41e1-a47b-b4a762090644/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.371794 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5a61e7-d6f1-4bde-9e45-3145734fd92a" path="/var/lib/kubelet/pods/6c5a61e7-d6f1-4bde-9e45-3145734fd92a/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.372269 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff464f1-683e-45cc-afc6-3e6e6331ee45" path="/var/lib/kubelet/pods/6ff464f1-683e-45cc-afc6-3e6e6331ee45/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.372757 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6f8085b-5435-44e5-ad0d-d189e218f138" path="/var/lib/kubelet/pods/b6f8085b-5435-44e5-ad0d-d189e218f138/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.373693 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6992f33-4605-433b-a5c3-6b227ce6cfd2" path="/var/lib/kubelet/pods/e6992f33-4605-433b-a5c3-6b227ce6cfd2/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.374161 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb608de4-6c70-4d9b-8c71-dcb8e1cd9132" path="/var/lib/kubelet/pods/eb608de4-6c70-4d9b-8c71-dcb8e1cd9132/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.374788 4802 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="f7a8891c-7f32-4c7e-8f71-8f359dd8de14" path="/var/lib/kubelet/pods/f7a8891c-7f32-4c7e-8f71-8f359dd8de14/volumes" Oct 04 05:25:00 crc kubenswrapper[4802]: I1004 05:25:00.375804 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb19d584-fe88-481f-aa23-8cbbe764ddc6" path="/var/lib/kubelet/pods/fb19d584-fe88-481f-aa23-8cbbe764ddc6/volumes" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.125967 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5"] Oct 04 05:25:05 crc kubenswrapper[4802]: E1004 05:25:05.127504 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerName="registry-server" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.127553 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerName="registry-server" Oct 04 05:25:05 crc kubenswrapper[4802]: E1004 05:25:05.127578 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerName="extract-content" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.127584 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerName="extract-content" Oct 04 05:25:05 crc kubenswrapper[4802]: E1004 05:25:05.127615 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerName="extract-utilities" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.127624 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerName="extract-utilities" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.127918 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6a1312-00b2-4780-8b59-196a21c561d3" containerName="registry-server" Oct 04 05:25:05 
crc kubenswrapper[4802]: I1004 05:25:05.129063 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.132034 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.133119 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.133484 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.133717 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.134508 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.148419 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5"] Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.306418 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.306480 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vddwn\" (UniqueName: \"kubernetes.io/projected/1013ab05-0b6e-458d-b876-e7bb43cbd153-kube-api-access-vddwn\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.306531 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.306561 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.306583 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.408214 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" 
Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.408307 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.408356 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.408733 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.409265 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vddwn\" (UniqueName: \"kubernetes.io/projected/1013ab05-0b6e-458d-b876-e7bb43cbd153-kube-api-access-vddwn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.414962 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.415046 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.417369 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.417858 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.426220 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vddwn\" (UniqueName: \"kubernetes.io/projected/1013ab05-0b6e-458d-b876-e7bb43cbd153-kube-api-access-vddwn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.457315 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:05 crc kubenswrapper[4802]: I1004 05:25:05.967506 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5"] Oct 04 05:25:06 crc kubenswrapper[4802]: I1004 05:25:06.229877 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" event={"ID":"1013ab05-0b6e-458d-b876-e7bb43cbd153","Type":"ContainerStarted","Data":"11f31e029fcb2d65e6ff2dbd160004c47109aa6f5b4de5d8001aeed195195878"} Oct 04 05:25:08 crc kubenswrapper[4802]: I1004 05:25:08.248680 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" event={"ID":"1013ab05-0b6e-458d-b876-e7bb43cbd153","Type":"ContainerStarted","Data":"0ea46912810eda800a71351c24fabe8f939730f5ab1063dc6c9a1c11b43e626e"} Oct 04 05:25:08 crc kubenswrapper[4802]: I1004 05:25:08.268745 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" podStartSLOduration=2.040478123 podStartE2EDuration="3.268721854s" podCreationTimestamp="2025-10-04 05:25:05 +0000 UTC" firstStartedPulling="2025-10-04 05:25:05.975751297 +0000 UTC m=+2348.383751922" lastFinishedPulling="2025-10-04 05:25:07.203995038 +0000 UTC m=+2349.611995653" observedRunningTime="2025-10-04 05:25:08.263585905 +0000 UTC m=+2350.671586550" watchObservedRunningTime="2025-10-04 05:25:08.268721854 +0000 UTC m=+2350.676722479" Oct 04 05:25:13 crc kubenswrapper[4802]: I1004 05:25:13.692240 4802 scope.go:117] "RemoveContainer" containerID="9423bdad419be243cf1ba24ac757f2af88db15e83131ff9d1af050feebd5980f" Oct 04 05:25:13 crc kubenswrapper[4802]: I1004 05:25:13.748528 4802 scope.go:117] "RemoveContainer" containerID="ba170bf05d39d33f079dc82857da02d53c2019810eee2bb9505fa124ab6c57db" Oct 04 05:25:13 crc 
kubenswrapper[4802]: I1004 05:25:13.783679 4802 scope.go:117] "RemoveContainer" containerID="2a5bf269b62e57ff07c6181fc1babb95026c83a0a58cc50335d1a5701d9a1b70" Oct 04 05:25:13 crc kubenswrapper[4802]: I1004 05:25:13.821734 4802 scope.go:117] "RemoveContainer" containerID="a5a9cde035062909bec6c4f4e0f7a058b0f562576e9321948bd5f12627c52f51" Oct 04 05:25:13 crc kubenswrapper[4802]: I1004 05:25:13.891974 4802 scope.go:117] "RemoveContainer" containerID="6ae4de67f9b6c15dd8f3e64f74e36474c1c6854bed9e5f615557e1cae9058120" Oct 04 05:25:13 crc kubenswrapper[4802]: I1004 05:25:13.961732 4802 scope.go:117] "RemoveContainer" containerID="ec3d486ccc9246d6c5f55b26f18cbb62caed517d7e1299a83099b7b8a4001c5f" Oct 04 05:25:13 crc kubenswrapper[4802]: I1004 05:25:13.985984 4802 scope.go:117] "RemoveContainer" containerID="17ed97b82b07f5c4ff6037e2c27630932aa252f6da49e75b3bbe22e52031a9da" Oct 04 05:25:14 crc kubenswrapper[4802]: I1004 05:25:14.024572 4802 scope.go:117] "RemoveContainer" containerID="a28c9fb77eabba6ad40293f646e33170f2afcfb43e38d2ae1ae01444d005ccab" Oct 04 05:25:14 crc kubenswrapper[4802]: I1004 05:25:14.069864 4802 scope.go:117] "RemoveContainer" containerID="d774fd189a818c341f413dbfdd5a2ccb1087e9ab98fc76fd3ed1f1bacf0fe5cb" Oct 04 05:25:22 crc kubenswrapper[4802]: I1004 05:25:22.662862 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:25:22 crc kubenswrapper[4802]: I1004 05:25:22.663618 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:25:28 crc 
kubenswrapper[4802]: I1004 05:25:28.425883 4802 generic.go:334] "Generic (PLEG): container finished" podID="1013ab05-0b6e-458d-b876-e7bb43cbd153" containerID="0ea46912810eda800a71351c24fabe8f939730f5ab1063dc6c9a1c11b43e626e" exitCode=0 Oct 04 05:25:28 crc kubenswrapper[4802]: I1004 05:25:28.425934 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" event={"ID":"1013ab05-0b6e-458d-b876-e7bb43cbd153","Type":"ContainerDied","Data":"0ea46912810eda800a71351c24fabe8f939730f5ab1063dc6c9a1c11b43e626e"} Oct 04 05:25:29 crc kubenswrapper[4802]: I1004 05:25:29.907598 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.067501 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vddwn\" (UniqueName: \"kubernetes.io/projected/1013ab05-0b6e-458d-b876-e7bb43cbd153-kube-api-access-vddwn\") pod \"1013ab05-0b6e-458d-b876-e7bb43cbd153\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.067561 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-repo-setup-combined-ca-bundle\") pod \"1013ab05-0b6e-458d-b876-e7bb43cbd153\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.067599 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ceph\") pod \"1013ab05-0b6e-458d-b876-e7bb43cbd153\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.067795 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ssh-key\") pod \"1013ab05-0b6e-458d-b876-e7bb43cbd153\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.067831 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-inventory\") pod \"1013ab05-0b6e-458d-b876-e7bb43cbd153\" (UID: \"1013ab05-0b6e-458d-b876-e7bb43cbd153\") " Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.074850 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ceph" (OuterVolumeSpecName: "ceph") pod "1013ab05-0b6e-458d-b876-e7bb43cbd153" (UID: "1013ab05-0b6e-458d-b876-e7bb43cbd153"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.074873 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1013ab05-0b6e-458d-b876-e7bb43cbd153-kube-api-access-vddwn" (OuterVolumeSpecName: "kube-api-access-vddwn") pod "1013ab05-0b6e-458d-b876-e7bb43cbd153" (UID: "1013ab05-0b6e-458d-b876-e7bb43cbd153"). InnerVolumeSpecName "kube-api-access-vddwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.075576 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1013ab05-0b6e-458d-b876-e7bb43cbd153" (UID: "1013ab05-0b6e-458d-b876-e7bb43cbd153"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.096839 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1013ab05-0b6e-458d-b876-e7bb43cbd153" (UID: "1013ab05-0b6e-458d-b876-e7bb43cbd153"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.099018 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-inventory" (OuterVolumeSpecName: "inventory") pod "1013ab05-0b6e-458d-b876-e7bb43cbd153" (UID: "1013ab05-0b6e-458d-b876-e7bb43cbd153"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.170295 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.170341 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.170358 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vddwn\" (UniqueName: \"kubernetes.io/projected/1013ab05-0b6e-458d-b876-e7bb43cbd153-kube-api-access-vddwn\") on node \"crc\" DevicePath \"\"" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.170371 4802 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:25:30 crc 
kubenswrapper[4802]: I1004 05:25:30.170385 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1013ab05-0b6e-458d-b876-e7bb43cbd153-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.445523 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" event={"ID":"1013ab05-0b6e-458d-b876-e7bb43cbd153","Type":"ContainerDied","Data":"11f31e029fcb2d65e6ff2dbd160004c47109aa6f5b4de5d8001aeed195195878"} Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.445879 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11f31e029fcb2d65e6ff2dbd160004c47109aa6f5b4de5d8001aeed195195878" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.445952 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.540863 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6"] Oct 04 05:25:30 crc kubenswrapper[4802]: E1004 05:25:30.541323 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1013ab05-0b6e-458d-b876-e7bb43cbd153" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.541347 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="1013ab05-0b6e-458d-b876-e7bb43cbd153" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.541549 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="1013ab05-0b6e-458d-b876-e7bb43cbd153" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.542314 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.557716 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6"] Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.559139 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.559364 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.559498 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.559568 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.559686 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.577856 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.577927 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: 
\"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.577972 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.578098 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdqc\" (UniqueName: \"kubernetes.io/projected/c902dd3b-da2a-4755-8f50-b3e93d33630f-kube-api-access-krdqc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.578187 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.679448 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.679582 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.679657 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.679706 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.679759 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdqc\" (UniqueName: \"kubernetes.io/projected/c902dd3b-da2a-4755-8f50-b3e93d33630f-kube-api-access-krdqc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.683581 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.684040 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.684274 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.689403 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.700835 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdqc\" (UniqueName: \"kubernetes.io/projected/c902dd3b-da2a-4755-8f50-b3e93d33630f-kube-api-access-krdqc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:30 crc kubenswrapper[4802]: I1004 05:25:30.858790 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:25:31 crc kubenswrapper[4802]: I1004 05:25:31.388248 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6"] Oct 04 05:25:31 crc kubenswrapper[4802]: I1004 05:25:31.397038 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 05:25:31 crc kubenswrapper[4802]: I1004 05:25:31.454984 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" event={"ID":"c902dd3b-da2a-4755-8f50-b3e93d33630f","Type":"ContainerStarted","Data":"73518b165f92f09440d1184fec79b11a58fcdbf2b1c867859c2f9a5ca09a1ec2"} Oct 04 05:25:33 crc kubenswrapper[4802]: I1004 05:25:33.473950 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" event={"ID":"c902dd3b-da2a-4755-8f50-b3e93d33630f","Type":"ContainerStarted","Data":"384a205ccf3fca94f1c5ebe7e89c042f7c727c20f5da63f96fa3c63d7e8fde28"} Oct 04 05:25:33 crc kubenswrapper[4802]: I1004 05:25:33.500540 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" podStartSLOduration=2.804228615 podStartE2EDuration="3.500517034s" podCreationTimestamp="2025-10-04 05:25:30 +0000 UTC" firstStartedPulling="2025-10-04 05:25:31.396810366 +0000 UTC m=+2373.804810991" lastFinishedPulling="2025-10-04 05:25:32.093098785 +0000 UTC m=+2374.501099410" observedRunningTime="2025-10-04 05:25:33.496102947 +0000 UTC m=+2375.904103582" watchObservedRunningTime="2025-10-04 05:25:33.500517034 +0000 UTC m=+2375.908517659" Oct 04 05:25:52 crc kubenswrapper[4802]: I1004 05:25:52.663289 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:25:52 crc kubenswrapper[4802]: I1004 05:25:52.663886 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:26:14 crc kubenswrapper[4802]: I1004 05:26:14.233730 4802 scope.go:117] "RemoveContainer" containerID="a7f0f903bd7da897033688e42fb3e5718eb6b099c2221998280eca50c9d7a35c" Oct 04 05:26:14 crc kubenswrapper[4802]: I1004 05:26:14.275202 4802 scope.go:117] "RemoveContainer" containerID="17f8ad48e79c8f80db07955016a510a82d2f6d35c6bfff093a630c00342ef442" Oct 04 05:26:22 crc kubenswrapper[4802]: I1004 05:26:22.662928 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:26:22 crc kubenswrapper[4802]: I1004 05:26:22.663504 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:26:22 crc kubenswrapper[4802]: I1004 05:26:22.663561 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:26:22 crc kubenswrapper[4802]: I1004 05:26:22.664397 4802 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:26:22 crc kubenswrapper[4802]: I1004 05:26:22.664542 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" gracePeriod=600 Oct 04 05:26:22 crc kubenswrapper[4802]: E1004 05:26:22.812144 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:26:22 crc kubenswrapper[4802]: I1004 05:26:22.884323 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" exitCode=0 Oct 04 05:26:22 crc kubenswrapper[4802]: I1004 05:26:22.884364 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b"} Oct 04 05:26:22 crc kubenswrapper[4802]: I1004 05:26:22.884395 4802 scope.go:117] "RemoveContainer" containerID="908fdeba8f2be8d21d6e188a6dddef471cfd923d4ae6e235cc59265fd970e5c7" Oct 04 05:26:22 crc 
kubenswrapper[4802]: I1004 05:26:22.885022 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:26:22 crc kubenswrapper[4802]: E1004 05:26:22.885233 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:26:35 crc kubenswrapper[4802]: I1004 05:26:35.359929 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:26:35 crc kubenswrapper[4802]: E1004 05:26:35.361030 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:26:47 crc kubenswrapper[4802]: I1004 05:26:47.360191 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:26:47 crc kubenswrapper[4802]: E1004 05:26:47.362702 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 
04 05:27:02 crc kubenswrapper[4802]: I1004 05:27:02.360104 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:27:02 crc kubenswrapper[4802]: E1004 05:27:02.360954 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:27:14 crc kubenswrapper[4802]: I1004 05:27:14.292459 4802 generic.go:334] "Generic (PLEG): container finished" podID="c902dd3b-da2a-4755-8f50-b3e93d33630f" containerID="384a205ccf3fca94f1c5ebe7e89c042f7c727c20f5da63f96fa3c63d7e8fde28" exitCode=0 Oct 04 05:27:14 crc kubenswrapper[4802]: I1004 05:27:14.292567 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" event={"ID":"c902dd3b-da2a-4755-8f50-b3e93d33630f","Type":"ContainerDied","Data":"384a205ccf3fca94f1c5ebe7e89c042f7c727c20f5da63f96fa3c63d7e8fde28"} Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.793688 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.924537 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krdqc\" (UniqueName: \"kubernetes.io/projected/c902dd3b-da2a-4755-8f50-b3e93d33630f-kube-api-access-krdqc\") pod \"c902dd3b-da2a-4755-8f50-b3e93d33630f\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.924622 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ceph\") pod \"c902dd3b-da2a-4755-8f50-b3e93d33630f\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.924927 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-bootstrap-combined-ca-bundle\") pod \"c902dd3b-da2a-4755-8f50-b3e93d33630f\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.925032 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-inventory\") pod \"c902dd3b-da2a-4755-8f50-b3e93d33630f\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.925067 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ssh-key\") pod \"c902dd3b-da2a-4755-8f50-b3e93d33630f\" (UID: \"c902dd3b-da2a-4755-8f50-b3e93d33630f\") " Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.931989 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c902dd3b-da2a-4755-8f50-b3e93d33630f-kube-api-access-krdqc" (OuterVolumeSpecName: "kube-api-access-krdqc") pod "c902dd3b-da2a-4755-8f50-b3e93d33630f" (UID: "c902dd3b-da2a-4755-8f50-b3e93d33630f"). InnerVolumeSpecName "kube-api-access-krdqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.936835 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ceph" (OuterVolumeSpecName: "ceph") pod "c902dd3b-da2a-4755-8f50-b3e93d33630f" (UID: "c902dd3b-da2a-4755-8f50-b3e93d33630f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.942403 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c902dd3b-da2a-4755-8f50-b3e93d33630f" (UID: "c902dd3b-da2a-4755-8f50-b3e93d33630f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.955554 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c902dd3b-da2a-4755-8f50-b3e93d33630f" (UID: "c902dd3b-da2a-4755-8f50-b3e93d33630f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:15 crc kubenswrapper[4802]: I1004 05:27:15.957187 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-inventory" (OuterVolumeSpecName: "inventory") pod "c902dd3b-da2a-4755-8f50-b3e93d33630f" (UID: "c902dd3b-da2a-4755-8f50-b3e93d33630f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.027147 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krdqc\" (UniqueName: \"kubernetes.io/projected/c902dd3b-da2a-4755-8f50-b3e93d33630f-kube-api-access-krdqc\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.027452 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.027466 4802 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.027475 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.027484 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c902dd3b-da2a-4755-8f50-b3e93d33630f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.311842 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" event={"ID":"c902dd3b-da2a-4755-8f50-b3e93d33630f","Type":"ContainerDied","Data":"73518b165f92f09440d1184fec79b11a58fcdbf2b1c867859c2f9a5ca09a1ec2"} Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.311897 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73518b165f92f09440d1184fec79b11a58fcdbf2b1c867859c2f9a5ca09a1ec2" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.311898 4802 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.392918 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6"] Oct 04 05:27:16 crc kubenswrapper[4802]: E1004 05:27:16.393417 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c902dd3b-da2a-4755-8f50-b3e93d33630f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.393442 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c902dd3b-da2a-4755-8f50-b3e93d33630f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.393634 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="c902dd3b-da2a-4755-8f50-b3e93d33630f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.394427 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.396632 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.397061 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.397276 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.398686 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.398784 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.401859 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6"] Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.536820 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89pm\" (UniqueName: \"kubernetes.io/projected/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-kube-api-access-g89pm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.536927 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: 
\"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.536954 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.537052 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.639360 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g89pm\" (UniqueName: \"kubernetes.io/projected/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-kube-api-access-g89pm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.639470 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.639500 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.639523 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.644239 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.645640 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.646675 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.658160 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g89pm\" (UniqueName: \"kubernetes.io/projected/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-kube-api-access-g89pm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-245d6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:16 crc kubenswrapper[4802]: I1004 05:27:16.715996 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:17 crc kubenswrapper[4802]: I1004 05:27:17.231751 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6"] Oct 04 05:27:17 crc kubenswrapper[4802]: I1004 05:27:17.320247 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" event={"ID":"b9ca1c91-cdae-401a-aa21-c5326e8afdb6","Type":"ContainerStarted","Data":"a52afba95c90b0d1ddbdc5f85234b0137fcd36c9215ab604de4b3a1ce564f096"} Oct 04 05:27:17 crc kubenswrapper[4802]: I1004 05:27:17.360135 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:27:17 crc kubenswrapper[4802]: E1004 05:27:17.360496 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:27:18 crc 
kubenswrapper[4802]: I1004 05:27:18.329129 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" event={"ID":"b9ca1c91-cdae-401a-aa21-c5326e8afdb6","Type":"ContainerStarted","Data":"07053f5fb114248f0443bb98ab3f00644dcea7387e0b0abd6fb7faee3e71c525"} Oct 04 05:27:18 crc kubenswrapper[4802]: I1004 05:27:18.344012 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" podStartSLOduration=1.546220011 podStartE2EDuration="2.343995588s" podCreationTimestamp="2025-10-04 05:27:16 +0000 UTC" firstStartedPulling="2025-10-04 05:27:17.241149103 +0000 UTC m=+2479.649149728" lastFinishedPulling="2025-10-04 05:27:18.03892468 +0000 UTC m=+2480.446925305" observedRunningTime="2025-10-04 05:27:18.342420453 +0000 UTC m=+2480.750421068" watchObservedRunningTime="2025-10-04 05:27:18.343995588 +0000 UTC m=+2480.751996203" Oct 04 05:27:30 crc kubenswrapper[4802]: I1004 05:27:30.360924 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:27:30 crc kubenswrapper[4802]: E1004 05:27:30.362252 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:27:42 crc kubenswrapper[4802]: I1004 05:27:42.359996 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:27:42 crc kubenswrapper[4802]: E1004 05:27:42.360909 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:27:43 crc kubenswrapper[4802]: I1004 05:27:43.538127 4802 generic.go:334] "Generic (PLEG): container finished" podID="b9ca1c91-cdae-401a-aa21-c5326e8afdb6" containerID="07053f5fb114248f0443bb98ab3f00644dcea7387e0b0abd6fb7faee3e71c525" exitCode=0 Oct 04 05:27:43 crc kubenswrapper[4802]: I1004 05:27:43.538256 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" event={"ID":"b9ca1c91-cdae-401a-aa21-c5326e8afdb6","Type":"ContainerDied","Data":"07053f5fb114248f0443bb98ab3f00644dcea7387e0b0abd6fb7faee3e71c525"} Oct 04 05:27:44 crc kubenswrapper[4802]: I1004 05:27:44.927077 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:44 crc kubenswrapper[4802]: I1004 05:27:44.995362 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-inventory\") pod \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " Oct 04 05:27:44 crc kubenswrapper[4802]: I1004 05:27:44.995534 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g89pm\" (UniqueName: \"kubernetes.io/projected/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-kube-api-access-g89pm\") pod \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " Oct 04 05:27:44 crc kubenswrapper[4802]: I1004 05:27:44.995586 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ceph\") pod \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " Oct 04 05:27:44 crc kubenswrapper[4802]: I1004 05:27:44.995759 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ssh-key\") pod \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\" (UID: \"b9ca1c91-cdae-401a-aa21-c5326e8afdb6\") " Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.003216 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-kube-api-access-g89pm" (OuterVolumeSpecName: "kube-api-access-g89pm") pod "b9ca1c91-cdae-401a-aa21-c5326e8afdb6" (UID: "b9ca1c91-cdae-401a-aa21-c5326e8afdb6"). InnerVolumeSpecName "kube-api-access-g89pm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.003742 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ceph" (OuterVolumeSpecName: "ceph") pod "b9ca1c91-cdae-401a-aa21-c5326e8afdb6" (UID: "b9ca1c91-cdae-401a-aa21-c5326e8afdb6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.027201 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-inventory" (OuterVolumeSpecName: "inventory") pod "b9ca1c91-cdae-401a-aa21-c5326e8afdb6" (UID: "b9ca1c91-cdae-401a-aa21-c5326e8afdb6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.029091 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b9ca1c91-cdae-401a-aa21-c5326e8afdb6" (UID: "b9ca1c91-cdae-401a-aa21-c5326e8afdb6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.098821 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.098862 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.098879 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g89pm\" (UniqueName: \"kubernetes.io/projected/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-kube-api-access-g89pm\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.098891 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ca1c91-cdae-401a-aa21-c5326e8afdb6-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.556963 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" event={"ID":"b9ca1c91-cdae-401a-aa21-c5326e8afdb6","Type":"ContainerDied","Data":"a52afba95c90b0d1ddbdc5f85234b0137fcd36c9215ab604de4b3a1ce564f096"} Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.557419 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a52afba95c90b0d1ddbdc5f85234b0137fcd36c9215ab604de4b3a1ce564f096" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.557081 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-245d6" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.632069 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr"] Oct 04 05:27:45 crc kubenswrapper[4802]: E1004 05:27:45.632425 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ca1c91-cdae-401a-aa21-c5326e8afdb6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.632448 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ca1c91-cdae-401a-aa21-c5326e8afdb6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.632820 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ca1c91-cdae-401a-aa21-c5326e8afdb6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.633439 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.640821 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.641347 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.641563 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.641764 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.642052 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.647838 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr"] Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.709859 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.709912 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wl4j\" (UniqueName: \"kubernetes.io/projected/0a7e40d1-1503-445c-997e-4094ec553767-kube-api-access-2wl4j\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: 
\"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.710175 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.710263 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.811876 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.811936 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wl4j\" (UniqueName: \"kubernetes.io/projected/0a7e40d1-1503-445c-997e-4094ec553767-kube-api-access-2wl4j\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.812075 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.812109 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.817146 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.817146 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.819666 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.834285 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wl4j\" (UniqueName: \"kubernetes.io/projected/0a7e40d1-1503-445c-997e-4094ec553767-kube-api-access-2wl4j\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:45 crc kubenswrapper[4802]: I1004 05:27:45.952256 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:46 crc kubenswrapper[4802]: I1004 05:27:46.454031 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr"] Oct 04 05:27:46 crc kubenswrapper[4802]: I1004 05:27:46.567286 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" event={"ID":"0a7e40d1-1503-445c-997e-4094ec553767","Type":"ContainerStarted","Data":"035e5635687dcdec699f3e29472be3d7125c68fc6112348ddcb8dccb6f686395"} Oct 04 05:27:48 crc kubenswrapper[4802]: I1004 05:27:48.584701 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" event={"ID":"0a7e40d1-1503-445c-997e-4094ec553767","Type":"ContainerStarted","Data":"0177c8a4a5aac541a07cc027e1872278f6366fe6384f4cd43db9cfef216424ca"} Oct 04 05:27:48 crc kubenswrapper[4802]: I1004 05:27:48.608036 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" podStartSLOduration=2.227401251 podStartE2EDuration="3.608020438s" podCreationTimestamp="2025-10-04 05:27:45 +0000 UTC" firstStartedPulling="2025-10-04 
05:27:46.468873613 +0000 UTC m=+2508.876874238" lastFinishedPulling="2025-10-04 05:27:47.8494928 +0000 UTC m=+2510.257493425" observedRunningTime="2025-10-04 05:27:48.604910828 +0000 UTC m=+2511.012911463" watchObservedRunningTime="2025-10-04 05:27:48.608020438 +0000 UTC m=+2511.016021063" Oct 04 05:27:53 crc kubenswrapper[4802]: I1004 05:27:53.360328 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:27:53 crc kubenswrapper[4802]: E1004 05:27:53.361140 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:27:53 crc kubenswrapper[4802]: I1004 05:27:53.622313 4802 generic.go:334] "Generic (PLEG): container finished" podID="0a7e40d1-1503-445c-997e-4094ec553767" containerID="0177c8a4a5aac541a07cc027e1872278f6366fe6384f4cd43db9cfef216424ca" exitCode=0 Oct 04 05:27:53 crc kubenswrapper[4802]: I1004 05:27:53.622376 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" event={"ID":"0a7e40d1-1503-445c-997e-4094ec553767","Type":"ContainerDied","Data":"0177c8a4a5aac541a07cc027e1872278f6366fe6384f4cd43db9cfef216424ca"} Oct 04 05:27:54 crc kubenswrapper[4802]: I1004 05:27:54.990945 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.094174 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ssh-key\") pod \"0a7e40d1-1503-445c-997e-4094ec553767\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.094314 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-inventory\") pod \"0a7e40d1-1503-445c-997e-4094ec553767\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.094352 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ceph\") pod \"0a7e40d1-1503-445c-997e-4094ec553767\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.094430 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wl4j\" (UniqueName: \"kubernetes.io/projected/0a7e40d1-1503-445c-997e-4094ec553767-kube-api-access-2wl4j\") pod \"0a7e40d1-1503-445c-997e-4094ec553767\" (UID: \"0a7e40d1-1503-445c-997e-4094ec553767\") " Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.099784 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ceph" (OuterVolumeSpecName: "ceph") pod "0a7e40d1-1503-445c-997e-4094ec553767" (UID: "0a7e40d1-1503-445c-997e-4094ec553767"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.103845 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7e40d1-1503-445c-997e-4094ec553767-kube-api-access-2wl4j" (OuterVolumeSpecName: "kube-api-access-2wl4j") pod "0a7e40d1-1503-445c-997e-4094ec553767" (UID: "0a7e40d1-1503-445c-997e-4094ec553767"). InnerVolumeSpecName "kube-api-access-2wl4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.122232 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-inventory" (OuterVolumeSpecName: "inventory") pod "0a7e40d1-1503-445c-997e-4094ec553767" (UID: "0a7e40d1-1503-445c-997e-4094ec553767"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.123728 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0a7e40d1-1503-445c-997e-4094ec553767" (UID: "0a7e40d1-1503-445c-997e-4094ec553767"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.197087 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.197133 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.197146 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a7e40d1-1503-445c-997e-4094ec553767-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.197156 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wl4j\" (UniqueName: \"kubernetes.io/projected/0a7e40d1-1503-445c-997e-4094ec553767-kube-api-access-2wl4j\") on node \"crc\" DevicePath \"\"" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.638205 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" event={"ID":"0a7e40d1-1503-445c-997e-4094ec553767","Type":"ContainerDied","Data":"035e5635687dcdec699f3e29472be3d7125c68fc6112348ddcb8dccb6f686395"} Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.638247 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="035e5635687dcdec699f3e29472be3d7125c68fc6112348ddcb8dccb6f686395" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.638309 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.707337 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw"] Oct 04 05:27:55 crc kubenswrapper[4802]: E1004 05:27:55.707830 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7e40d1-1503-445c-997e-4094ec553767" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.707860 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7e40d1-1503-445c-997e-4094ec553767" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.708080 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7e40d1-1503-445c-997e-4094ec553767" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.708808 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.712050 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.712212 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.712371 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.712423 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.712376 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.719592 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw"] Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.810460 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.810538 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.810835 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.810887 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5mb\" (UniqueName: \"kubernetes.io/projected/aa515445-486c-4d0f-94ac-2f0bb785120f-kube-api-access-zm5mb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.912598 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.912660 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm5mb\" (UniqueName: \"kubernetes.io/projected/aa515445-486c-4d0f-94ac-2f0bb785120f-kube-api-access-zm5mb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.912723 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.912779 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.918337 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.918651 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.927337 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:55 crc kubenswrapper[4802]: I1004 05:27:55.929398 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm5mb\" (UniqueName: \"kubernetes.io/projected/aa515445-486c-4d0f-94ac-2f0bb785120f-kube-api-access-zm5mb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7c9sw\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:56 crc kubenswrapper[4802]: I1004 05:27:56.023926 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:27:56 crc kubenswrapper[4802]: I1004 05:27:56.515894 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw"] Oct 04 05:27:56 crc kubenswrapper[4802]: I1004 05:27:56.646509 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" event={"ID":"aa515445-486c-4d0f-94ac-2f0bb785120f","Type":"ContainerStarted","Data":"3b75cf6b042f42a40e09a14ced7c8532f933fa63d073f80f401355f7783d3324"} Oct 04 05:27:57 crc kubenswrapper[4802]: I1004 05:27:57.654478 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" event={"ID":"aa515445-486c-4d0f-94ac-2f0bb785120f","Type":"ContainerStarted","Data":"d6f1267d569ddc43bd9c7538c270b77631ca813aed391fb0b8703d6bf42958ff"} Oct 04 05:27:57 crc kubenswrapper[4802]: I1004 05:27:57.668553 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" podStartSLOduration=1.9879274850000002 podStartE2EDuration="2.668536941s" podCreationTimestamp="2025-10-04 05:27:55 +0000 UTC" firstStartedPulling="2025-10-04 05:27:56.520067072 +0000 UTC m=+2518.928067697" lastFinishedPulling="2025-10-04 05:27:57.200676528 +0000 UTC m=+2519.608677153" observedRunningTime="2025-10-04 05:27:57.666140082 +0000 
UTC m=+2520.074140727" watchObservedRunningTime="2025-10-04 05:27:57.668536941 +0000 UTC m=+2520.076537566" Oct 04 05:28:04 crc kubenswrapper[4802]: I1004 05:28:04.360001 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:28:04 crc kubenswrapper[4802]: E1004 05:28:04.360800 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:28:19 crc kubenswrapper[4802]: I1004 05:28:19.359629 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:28:19 crc kubenswrapper[4802]: E1004 05:28:19.363118 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:28:30 crc kubenswrapper[4802]: I1004 05:28:30.360338 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:28:30 crc kubenswrapper[4802]: E1004 05:28:30.361270 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:28:30 crc kubenswrapper[4802]: I1004 05:28:30.918121 4802 generic.go:334] "Generic (PLEG): container finished" podID="aa515445-486c-4d0f-94ac-2f0bb785120f" containerID="d6f1267d569ddc43bd9c7538c270b77631ca813aed391fb0b8703d6bf42958ff" exitCode=0 Oct 04 05:28:30 crc kubenswrapper[4802]: I1004 05:28:30.918170 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" event={"ID":"aa515445-486c-4d0f-94ac-2f0bb785120f","Type":"ContainerDied","Data":"d6f1267d569ddc43bd9c7538c270b77631ca813aed391fb0b8703d6bf42958ff"} Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.324690 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.460294 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ssh-key\") pod \"aa515445-486c-4d0f-94ac-2f0bb785120f\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.460373 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-inventory\") pod \"aa515445-486c-4d0f-94ac-2f0bb785120f\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.460398 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ceph\") pod \"aa515445-486c-4d0f-94ac-2f0bb785120f\" (UID: 
\"aa515445-486c-4d0f-94ac-2f0bb785120f\") " Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.460430 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm5mb\" (UniqueName: \"kubernetes.io/projected/aa515445-486c-4d0f-94ac-2f0bb785120f-kube-api-access-zm5mb\") pod \"aa515445-486c-4d0f-94ac-2f0bb785120f\" (UID: \"aa515445-486c-4d0f-94ac-2f0bb785120f\") " Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.468665 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa515445-486c-4d0f-94ac-2f0bb785120f-kube-api-access-zm5mb" (OuterVolumeSpecName: "kube-api-access-zm5mb") pod "aa515445-486c-4d0f-94ac-2f0bb785120f" (UID: "aa515445-486c-4d0f-94ac-2f0bb785120f"). InnerVolumeSpecName "kube-api-access-zm5mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.468883 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ceph" (OuterVolumeSpecName: "ceph") pod "aa515445-486c-4d0f-94ac-2f0bb785120f" (UID: "aa515445-486c-4d0f-94ac-2f0bb785120f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.486272 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aa515445-486c-4d0f-94ac-2f0bb785120f" (UID: "aa515445-486c-4d0f-94ac-2f0bb785120f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.488438 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-inventory" (OuterVolumeSpecName: "inventory") pod "aa515445-486c-4d0f-94ac-2f0bb785120f" (UID: "aa515445-486c-4d0f-94ac-2f0bb785120f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.562807 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.562864 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.562878 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa515445-486c-4d0f-94ac-2f0bb785120f-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.562892 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm5mb\" (UniqueName: \"kubernetes.io/projected/aa515445-486c-4d0f-94ac-2f0bb785120f-kube-api-access-zm5mb\") on node \"crc\" DevicePath \"\"" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.949602 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" event={"ID":"aa515445-486c-4d0f-94ac-2f0bb785120f","Type":"ContainerDied","Data":"3b75cf6b042f42a40e09a14ced7c8532f933fa63d073f80f401355f7783d3324"} Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.950042 4802 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3b75cf6b042f42a40e09a14ced7c8532f933fa63d073f80f401355f7783d3324" Oct 04 05:28:32 crc kubenswrapper[4802]: I1004 05:28:32.949701 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7c9sw" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.012804 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh"] Oct 04 05:28:33 crc kubenswrapper[4802]: E1004 05:28:33.013251 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa515445-486c-4d0f-94ac-2f0bb785120f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.013271 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa515445-486c-4d0f-94ac-2f0bb785120f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.013436 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa515445-486c-4d0f-94ac-2f0bb785120f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.014126 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.017656 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.017810 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.017928 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.018279 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.018350 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.024193 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh"] Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.070288 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.070407 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxslp\" (UniqueName: \"kubernetes.io/projected/852da720-96fe-413d-8126-89ebf6f859ea-kube-api-access-gxslp\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: 
\"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.070484 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.070623 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.172791 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.172946 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.173001 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.173058 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxslp\" (UniqueName: \"kubernetes.io/projected/852da720-96fe-413d-8126-89ebf6f859ea-kube-api-access-gxslp\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.177435 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.177871 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.178190 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc 
kubenswrapper[4802]: I1004 05:28:33.192397 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxslp\" (UniqueName: \"kubernetes.io/projected/852da720-96fe-413d-8126-89ebf6f859ea-kube-api-access-gxslp\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.330206 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.853756 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh"] Oct 04 05:28:33 crc kubenswrapper[4802]: I1004 05:28:33.958038 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" event={"ID":"852da720-96fe-413d-8126-89ebf6f859ea","Type":"ContainerStarted","Data":"67eef95f39b41fde940f408789e4b07bc4dadc2275d94d4a2486ebbe0c0e5589"} Oct 04 05:28:34 crc kubenswrapper[4802]: I1004 05:28:34.967820 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" event={"ID":"852da720-96fe-413d-8126-89ebf6f859ea","Type":"ContainerStarted","Data":"b7d38f24fe9abee171bb701239acda63707c61f78dfbcaa8272da7cc25d7526f"} Oct 04 05:28:34 crc kubenswrapper[4802]: I1004 05:28:34.988187 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" podStartSLOduration=2.431705521 podStartE2EDuration="2.988166963s" podCreationTimestamp="2025-10-04 05:28:32 +0000 UTC" firstStartedPulling="2025-10-04 05:28:33.862617725 +0000 UTC m=+2556.270618370" lastFinishedPulling="2025-10-04 05:28:34.419079187 +0000 UTC m=+2556.827079812" 
observedRunningTime="2025-10-04 05:28:34.984449036 +0000 UTC m=+2557.392449661" watchObservedRunningTime="2025-10-04 05:28:34.988166963 +0000 UTC m=+2557.396167588" Oct 04 05:28:39 crc kubenswrapper[4802]: I1004 05:28:39.000564 4802 generic.go:334] "Generic (PLEG): container finished" podID="852da720-96fe-413d-8126-89ebf6f859ea" containerID="b7d38f24fe9abee171bb701239acda63707c61f78dfbcaa8272da7cc25d7526f" exitCode=0 Oct 04 05:28:39 crc kubenswrapper[4802]: I1004 05:28:39.000678 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" event={"ID":"852da720-96fe-413d-8126-89ebf6f859ea","Type":"ContainerDied","Data":"b7d38f24fe9abee171bb701239acda63707c61f78dfbcaa8272da7cc25d7526f"} Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.420750 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.516245 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-inventory\") pod \"852da720-96fe-413d-8126-89ebf6f859ea\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.516350 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ceph\") pod \"852da720-96fe-413d-8126-89ebf6f859ea\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.516420 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxslp\" (UniqueName: \"kubernetes.io/projected/852da720-96fe-413d-8126-89ebf6f859ea-kube-api-access-gxslp\") pod \"852da720-96fe-413d-8126-89ebf6f859ea\" (UID: 
\"852da720-96fe-413d-8126-89ebf6f859ea\") " Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.516439 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ssh-key\") pod \"852da720-96fe-413d-8126-89ebf6f859ea\" (UID: \"852da720-96fe-413d-8126-89ebf6f859ea\") " Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.527173 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ceph" (OuterVolumeSpecName: "ceph") pod "852da720-96fe-413d-8126-89ebf6f859ea" (UID: "852da720-96fe-413d-8126-89ebf6f859ea"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.531841 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852da720-96fe-413d-8126-89ebf6f859ea-kube-api-access-gxslp" (OuterVolumeSpecName: "kube-api-access-gxslp") pod "852da720-96fe-413d-8126-89ebf6f859ea" (UID: "852da720-96fe-413d-8126-89ebf6f859ea"). InnerVolumeSpecName "kube-api-access-gxslp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.543008 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-inventory" (OuterVolumeSpecName: "inventory") pod "852da720-96fe-413d-8126-89ebf6f859ea" (UID: "852da720-96fe-413d-8126-89ebf6f859ea"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.545747 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "852da720-96fe-413d-8126-89ebf6f859ea" (UID: "852da720-96fe-413d-8126-89ebf6f859ea"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.618730 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.619044 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.619060 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxslp\" (UniqueName: \"kubernetes.io/projected/852da720-96fe-413d-8126-89ebf6f859ea-kube-api-access-gxslp\") on node \"crc\" DevicePath \"\"" Oct 04 05:28:40 crc kubenswrapper[4802]: I1004 05:28:40.619075 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/852da720-96fe-413d-8126-89ebf6f859ea-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.020971 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" event={"ID":"852da720-96fe-413d-8126-89ebf6f859ea","Type":"ContainerDied","Data":"67eef95f39b41fde940f408789e4b07bc4dadc2275d94d4a2486ebbe0c0e5589"} Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.021009 4802 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="67eef95f39b41fde940f408789e4b07bc4dadc2275d94d4a2486ebbe0c0e5589" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.021057 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.094387 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2"] Oct 04 05:28:41 crc kubenswrapper[4802]: E1004 05:28:41.095196 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852da720-96fe-413d-8126-89ebf6f859ea" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.095217 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="852da720-96fe-413d-8126-89ebf6f859ea" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.095468 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="852da720-96fe-413d-8126-89ebf6f859ea" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.096261 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.098181 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.099024 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.099062 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.099179 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.101191 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.102576 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2"] Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.232083 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.232280 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.232350 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.232504 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95vw\" (UniqueName: \"kubernetes.io/projected/f08fb8cd-38aa-4cde-9321-43ae01965484-kube-api-access-n95vw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.334806 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95vw\" (UniqueName: \"kubernetes.io/projected/f08fb8cd-38aa-4cde-9321-43ae01965484-kube-api-access-n95vw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.334930 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.334995 4802 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.335023 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.339935 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.340019 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.345204 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc 
kubenswrapper[4802]: I1004 05:28:41.352717 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95vw\" (UniqueName: \"kubernetes.io/projected/f08fb8cd-38aa-4cde-9321-43ae01965484-kube-api-access-n95vw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-npkj2\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.426908 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:28:41 crc kubenswrapper[4802]: I1004 05:28:41.924725 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2"] Oct 04 05:28:41 crc kubenswrapper[4802]: W1004 05:28:41.926015 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf08fb8cd_38aa_4cde_9321_43ae01965484.slice/crio-5f00bfb5f8dcda6f14d4c60795479a4e63a8936196f7e9f059b1d34aee7afa12 WatchSource:0}: Error finding container 5f00bfb5f8dcda6f14d4c60795479a4e63a8936196f7e9f059b1d34aee7afa12: Status 404 returned error can't find the container with id 5f00bfb5f8dcda6f14d4c60795479a4e63a8936196f7e9f059b1d34aee7afa12 Oct 04 05:28:42 crc kubenswrapper[4802]: I1004 05:28:42.030434 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" event={"ID":"f08fb8cd-38aa-4cde-9321-43ae01965484","Type":"ContainerStarted","Data":"5f00bfb5f8dcda6f14d4c60795479a4e63a8936196f7e9f059b1d34aee7afa12"} Oct 04 05:28:42 crc kubenswrapper[4802]: I1004 05:28:42.360246 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:28:42 crc kubenswrapper[4802]: E1004 05:28:42.360624 4802 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:28:43 crc kubenswrapper[4802]: I1004 05:28:43.039869 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" event={"ID":"f08fb8cd-38aa-4cde-9321-43ae01965484","Type":"ContainerStarted","Data":"40b738fc870c7a142abfda9004185eeed95b7b477cf772fc875b72722cfb9342"} Oct 04 05:28:43 crc kubenswrapper[4802]: I1004 05:28:43.071707 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" podStartSLOduration=1.519839202 podStartE2EDuration="2.071686342s" podCreationTimestamp="2025-10-04 05:28:41 +0000 UTC" firstStartedPulling="2025-10-04 05:28:41.930653688 +0000 UTC m=+2564.338654313" lastFinishedPulling="2025-10-04 05:28:42.482500828 +0000 UTC m=+2564.890501453" observedRunningTime="2025-10-04 05:28:43.061563041 +0000 UTC m=+2565.469563686" watchObservedRunningTime="2025-10-04 05:28:43.071686342 +0000 UTC m=+2565.479686967" Oct 04 05:28:55 crc kubenswrapper[4802]: I1004 05:28:55.360204 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:28:55 crc kubenswrapper[4802]: E1004 05:28:55.361254 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:29:08 crc kubenswrapper[4802]: I1004 05:29:08.367280 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:29:08 crc kubenswrapper[4802]: E1004 05:29:08.368189 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:29:20 crc kubenswrapper[4802]: I1004 05:29:20.360143 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:29:20 crc kubenswrapper[4802]: E1004 05:29:20.360932 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:29:25 crc kubenswrapper[4802]: I1004 05:29:25.353563 4802 generic.go:334] "Generic (PLEG): container finished" podID="f08fb8cd-38aa-4cde-9321-43ae01965484" containerID="40b738fc870c7a142abfda9004185eeed95b7b477cf772fc875b72722cfb9342" exitCode=0 Oct 04 05:29:25 crc kubenswrapper[4802]: I1004 05:29:25.353689 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" 
event={"ID":"f08fb8cd-38aa-4cde-9321-43ae01965484","Type":"ContainerDied","Data":"40b738fc870c7a142abfda9004185eeed95b7b477cf772fc875b72722cfb9342"} Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.747591 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.826345 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n95vw\" (UniqueName: \"kubernetes.io/projected/f08fb8cd-38aa-4cde-9321-43ae01965484-kube-api-access-n95vw\") pod \"f08fb8cd-38aa-4cde-9321-43ae01965484\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.826508 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ssh-key\") pod \"f08fb8cd-38aa-4cde-9321-43ae01965484\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.826567 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ceph\") pod \"f08fb8cd-38aa-4cde-9321-43ae01965484\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.826610 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-inventory\") pod \"f08fb8cd-38aa-4cde-9321-43ae01965484\" (UID: \"f08fb8cd-38aa-4cde-9321-43ae01965484\") " Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.832936 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08fb8cd-38aa-4cde-9321-43ae01965484-kube-api-access-n95vw" (OuterVolumeSpecName: 
"kube-api-access-n95vw") pod "f08fb8cd-38aa-4cde-9321-43ae01965484" (UID: "f08fb8cd-38aa-4cde-9321-43ae01965484"). InnerVolumeSpecName "kube-api-access-n95vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.833042 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ceph" (OuterVolumeSpecName: "ceph") pod "f08fb8cd-38aa-4cde-9321-43ae01965484" (UID: "f08fb8cd-38aa-4cde-9321-43ae01965484"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.857384 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-inventory" (OuterVolumeSpecName: "inventory") pod "f08fb8cd-38aa-4cde-9321-43ae01965484" (UID: "f08fb8cd-38aa-4cde-9321-43ae01965484"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.858712 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f08fb8cd-38aa-4cde-9321-43ae01965484" (UID: "f08fb8cd-38aa-4cde-9321-43ae01965484"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.928814 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.928856 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.928865 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f08fb8cd-38aa-4cde-9321-43ae01965484-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:26 crc kubenswrapper[4802]: I1004 05:29:26.928875 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n95vw\" (UniqueName: \"kubernetes.io/projected/f08fb8cd-38aa-4cde-9321-43ae01965484-kube-api-access-n95vw\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.375832 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" event={"ID":"f08fb8cd-38aa-4cde-9321-43ae01965484","Type":"ContainerDied","Data":"5f00bfb5f8dcda6f14d4c60795479a4e63a8936196f7e9f059b1d34aee7afa12"} Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.376203 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f00bfb5f8dcda6f14d4c60795479a4e63a8936196f7e9f059b1d34aee7afa12" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.375903 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-npkj2" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.446602 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vcbwm"] Oct 04 05:29:27 crc kubenswrapper[4802]: E1004 05:29:27.447080 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08fb8cd-38aa-4cde-9321-43ae01965484" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.447107 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08fb8cd-38aa-4cde-9321-43ae01965484" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.447336 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08fb8cd-38aa-4cde-9321-43ae01965484" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.448077 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.451579 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.451676 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.451791 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.452057 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.452372 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.460036 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vcbwm"] Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.582904 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ceph\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.583001 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 
crc kubenswrapper[4802]: I1004 05:29:27.583028 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9j9\" (UniqueName: \"kubernetes.io/projected/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-kube-api-access-fg9j9\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.583148 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.685820 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg9j9\" (UniqueName: \"kubernetes.io/projected/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-kube-api-access-fg9j9\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.686056 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.686435 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ceph\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.686812 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.690301 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.690352 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ceph\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.693896 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.707050 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg9j9\" (UniqueName: \"kubernetes.io/projected/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-kube-api-access-fg9j9\") pod \"ssh-known-hosts-edpm-deployment-vcbwm\" (UID: 
\"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:27 crc kubenswrapper[4802]: I1004 05:29:27.767894 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:28 crc kubenswrapper[4802]: I1004 05:29:28.282546 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vcbwm"] Oct 04 05:29:28 crc kubenswrapper[4802]: I1004 05:29:28.386677 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" event={"ID":"a9517a29-8dbc-4973-bdbf-67bf3f9bddde","Type":"ContainerStarted","Data":"2aacf5e165c2c17a4ea313b754c2bfd560d0877a3dea3fc439c8272ad6c645d1"} Oct 04 05:29:29 crc kubenswrapper[4802]: I1004 05:29:29.397169 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" event={"ID":"a9517a29-8dbc-4973-bdbf-67bf3f9bddde","Type":"ContainerStarted","Data":"c79ab5b32eaedcce9d7d51d09e222b569a017163eaccd2c775a2b1fc2dae2671"} Oct 04 05:29:29 crc kubenswrapper[4802]: I1004 05:29:29.421706 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" podStartSLOduration=1.786013633 podStartE2EDuration="2.421676692s" podCreationTimestamp="2025-10-04 05:29:27 +0000 UTC" firstStartedPulling="2025-10-04 05:29:28.294712133 +0000 UTC m=+2610.702712758" lastFinishedPulling="2025-10-04 05:29:28.930375192 +0000 UTC m=+2611.338375817" observedRunningTime="2025-10-04 05:29:29.411152709 +0000 UTC m=+2611.819153334" watchObservedRunningTime="2025-10-04 05:29:29.421676692 +0000 UTC m=+2611.829677347" Oct 04 05:29:34 crc kubenswrapper[4802]: I1004 05:29:34.360284 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:29:34 crc kubenswrapper[4802]: E1004 05:29:34.360947 4802 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:29:37 crc kubenswrapper[4802]: I1004 05:29:37.457482 4802 generic.go:334] "Generic (PLEG): container finished" podID="a9517a29-8dbc-4973-bdbf-67bf3f9bddde" containerID="c79ab5b32eaedcce9d7d51d09e222b569a017163eaccd2c775a2b1fc2dae2671" exitCode=0 Oct 04 05:29:37 crc kubenswrapper[4802]: I1004 05:29:37.457763 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" event={"ID":"a9517a29-8dbc-4973-bdbf-67bf3f9bddde","Type":"ContainerDied","Data":"c79ab5b32eaedcce9d7d51d09e222b569a017163eaccd2c775a2b1fc2dae2671"} Oct 04 05:29:38 crc kubenswrapper[4802]: I1004 05:29:38.839424 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:38 crc kubenswrapper[4802]: I1004 05:29:38.980327 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ceph\") pod \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " Oct 04 05:29:38 crc kubenswrapper[4802]: I1004 05:29:38.980574 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ssh-key-openstack-edpm-ipam\") pod \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " Oct 04 05:29:38 crc kubenswrapper[4802]: I1004 05:29:38.980684 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg9j9\" (UniqueName: \"kubernetes.io/projected/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-kube-api-access-fg9j9\") pod \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " Oct 04 05:29:38 crc kubenswrapper[4802]: I1004 05:29:38.980756 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-inventory-0\") pod \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\" (UID: \"a9517a29-8dbc-4973-bdbf-67bf3f9bddde\") " Oct 04 05:29:38 crc kubenswrapper[4802]: I1004 05:29:38.986860 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-kube-api-access-fg9j9" (OuterVolumeSpecName: "kube-api-access-fg9j9") pod "a9517a29-8dbc-4973-bdbf-67bf3f9bddde" (UID: "a9517a29-8dbc-4973-bdbf-67bf3f9bddde"). InnerVolumeSpecName "kube-api-access-fg9j9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:29:38 crc kubenswrapper[4802]: I1004 05:29:38.987089 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ceph" (OuterVolumeSpecName: "ceph") pod "a9517a29-8dbc-4973-bdbf-67bf3f9bddde" (UID: "a9517a29-8dbc-4973-bdbf-67bf3f9bddde"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.007124 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a9517a29-8dbc-4973-bdbf-67bf3f9bddde" (UID: "a9517a29-8dbc-4973-bdbf-67bf3f9bddde"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.007973 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a9517a29-8dbc-4973-bdbf-67bf3f9bddde" (UID: "a9517a29-8dbc-4973-bdbf-67bf3f9bddde"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.083121 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg9j9\" (UniqueName: \"kubernetes.io/projected/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-kube-api-access-fg9j9\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.083168 4802 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.083183 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.083195 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9517a29-8dbc-4973-bdbf-67bf3f9bddde-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.473691 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" event={"ID":"a9517a29-8dbc-4973-bdbf-67bf3f9bddde","Type":"ContainerDied","Data":"2aacf5e165c2c17a4ea313b754c2bfd560d0877a3dea3fc439c8272ad6c645d1"} Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.473729 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aacf5e165c2c17a4ea313b754c2bfd560d0877a3dea3fc439c8272ad6c645d1" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.473785 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vcbwm" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.564284 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb"] Oct 04 05:29:39 crc kubenswrapper[4802]: E1004 05:29:39.565124 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9517a29-8dbc-4973-bdbf-67bf3f9bddde" containerName="ssh-known-hosts-edpm-deployment" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.565154 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9517a29-8dbc-4973-bdbf-67bf3f9bddde" containerName="ssh-known-hosts-edpm-deployment" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.565394 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9517a29-8dbc-4973-bdbf-67bf3f9bddde" containerName="ssh-known-hosts-edpm-deployment" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.566667 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.569052 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.569361 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.569830 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.572158 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.572408 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.572926 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb"] Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.695787 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.696195 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.696359 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.696557 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffbs\" (UniqueName: \"kubernetes.io/projected/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-kube-api-access-9ffbs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.797654 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.797784 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.797842 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.797873 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ffbs\" (UniqueName: \"kubernetes.io/projected/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-kube-api-access-9ffbs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.802787 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.802918 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.802964 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.819509 4802 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9ffbs\" (UniqueName: \"kubernetes.io/projected/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-kube-api-access-9ffbs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8hftb\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:39 crc kubenswrapper[4802]: I1004 05:29:39.893367 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:40 crc kubenswrapper[4802]: I1004 05:29:40.372901 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb"] Oct 04 05:29:40 crc kubenswrapper[4802]: I1004 05:29:40.484402 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" event={"ID":"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce","Type":"ContainerStarted","Data":"e7f256e5f68fc6704a7a28f31a7a7e30d80253945888657a034bfbd91d9fa300"} Oct 04 05:29:41 crc kubenswrapper[4802]: I1004 05:29:41.493898 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" event={"ID":"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce","Type":"ContainerStarted","Data":"e5cd6f6133680bc9547de6b98c1981c9aefa4e1e08adfaf3e1fa41627ff56f26"} Oct 04 05:29:41 crc kubenswrapper[4802]: I1004 05:29:41.510521 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" podStartSLOduration=1.915762068 podStartE2EDuration="2.510500848s" podCreationTimestamp="2025-10-04 05:29:39 +0000 UTC" firstStartedPulling="2025-10-04 05:29:40.385692311 +0000 UTC m=+2622.793692936" lastFinishedPulling="2025-10-04 05:29:40.980431091 +0000 UTC m=+2623.388431716" observedRunningTime="2025-10-04 05:29:41.507153951 +0000 UTC m=+2623.915154576" watchObservedRunningTime="2025-10-04 
05:29:41.510500848 +0000 UTC m=+2623.918501473" Oct 04 05:29:48 crc kubenswrapper[4802]: I1004 05:29:48.365600 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:29:48 crc kubenswrapper[4802]: E1004 05:29:48.366507 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:29:48 crc kubenswrapper[4802]: I1004 05:29:48.555264 4802 generic.go:334] "Generic (PLEG): container finished" podID="7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce" containerID="e5cd6f6133680bc9547de6b98c1981c9aefa4e1e08adfaf3e1fa41627ff56f26" exitCode=0 Oct 04 05:29:48 crc kubenswrapper[4802]: I1004 05:29:48.555312 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" event={"ID":"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce","Type":"ContainerDied","Data":"e5cd6f6133680bc9547de6b98c1981c9aefa4e1e08adfaf3e1fa41627ff56f26"} Oct 04 05:29:49 crc kubenswrapper[4802]: I1004 05:29:49.939148 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:49 crc kubenswrapper[4802]: I1004 05:29:49.981870 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-inventory\") pod \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " Oct 04 05:29:49 crc kubenswrapper[4802]: I1004 05:29:49.981936 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ffbs\" (UniqueName: \"kubernetes.io/projected/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-kube-api-access-9ffbs\") pod \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " Oct 04 05:29:49 crc kubenswrapper[4802]: I1004 05:29:49.982013 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ceph\") pod \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " Oct 04 05:29:49 crc kubenswrapper[4802]: I1004 05:29:49.982084 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ssh-key\") pod \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\" (UID: \"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce\") " Oct 04 05:29:49 crc kubenswrapper[4802]: I1004 05:29:49.987677 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ceph" (OuterVolumeSpecName: "ceph") pod "7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce" (UID: "7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:49 crc kubenswrapper[4802]: I1004 05:29:49.995533 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-kube-api-access-9ffbs" (OuterVolumeSpecName: "kube-api-access-9ffbs") pod "7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce" (UID: "7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce"). InnerVolumeSpecName "kube-api-access-9ffbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.008615 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce" (UID: "7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.011611 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-inventory" (OuterVolumeSpecName: "inventory") pod "7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce" (UID: "7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.083702 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.083737 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.083747 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.083756 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ffbs\" (UniqueName: \"kubernetes.io/projected/7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce-kube-api-access-9ffbs\") on node \"crc\" DevicePath \"\"" Oct 04 05:29:50 crc kubenswrapper[4802]: E1004 05:29:50.428088 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b7f1a35_8c31_4ec4_8ab0_778aabb8a7ce.slice/crio-e7f256e5f68fc6704a7a28f31a7a7e30d80253945888657a034bfbd91d9fa300\": RecentStats: unable to find data in memory cache]" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.574635 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" event={"ID":"7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce","Type":"ContainerDied","Data":"e7f256e5f68fc6704a7a28f31a7a7e30d80253945888657a034bfbd91d9fa300"} Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.574944 4802 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e7f256e5f68fc6704a7a28f31a7a7e30d80253945888657a034bfbd91d9fa300" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.574714 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8hftb" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.653216 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77"] Oct 04 05:29:50 crc kubenswrapper[4802]: E1004 05:29:50.654080 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.654111 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.654442 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.655589 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.660152 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.660875 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.661159 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.661350 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.661542 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.666754 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77"] Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.694929 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.694992 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.695335 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hz5\" (UniqueName: \"kubernetes.io/projected/86da9375-0b75-4d5b-8519-e2cba79ba8f2-kube-api-access-h5hz5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.695480 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.797597 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5hz5\" (UniqueName: \"kubernetes.io/projected/86da9375-0b75-4d5b-8519-e2cba79ba8f2-kube-api-access-h5hz5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.797700 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.797757 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.797789 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.803633 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.803699 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.804530 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.817671 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5hz5\" (UniqueName: \"kubernetes.io/projected/86da9375-0b75-4d5b-8519-e2cba79ba8f2-kube-api-access-h5hz5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bft77\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:50 crc kubenswrapper[4802]: I1004 05:29:50.988668 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:29:51 crc kubenswrapper[4802]: I1004 05:29:51.457293 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77"] Oct 04 05:29:51 crc kubenswrapper[4802]: I1004 05:29:51.584426 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" event={"ID":"86da9375-0b75-4d5b-8519-e2cba79ba8f2","Type":"ContainerStarted","Data":"84ea3deaa90767e5a6e17795eba7690bb4e1a59429e47782eb76280a377c1d43"} Oct 04 05:29:52 crc kubenswrapper[4802]: I1004 05:29:52.593695 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" event={"ID":"86da9375-0b75-4d5b-8519-e2cba79ba8f2","Type":"ContainerStarted","Data":"b891adb78cf9a33094ea276565a41f675104c5835f9b51ac7ebe30b3fd29d823"} Oct 04 05:29:52 crc kubenswrapper[4802]: I1004 05:29:52.612838 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" podStartSLOduration=2.057593351 podStartE2EDuration="2.612819353s" podCreationTimestamp="2025-10-04 05:29:50 +0000 UTC" firstStartedPulling="2025-10-04 05:29:51.463129479 +0000 UTC m=+2633.871130104" lastFinishedPulling="2025-10-04 05:29:52.018355481 +0000 UTC m=+2634.426356106" observedRunningTime="2025-10-04 05:29:52.609146407 +0000 UTC 
m=+2635.017147042" watchObservedRunningTime="2025-10-04 05:29:52.612819353 +0000 UTC m=+2635.020819978" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.134273 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq"] Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.137416 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.140879 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.140898 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.155485 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq"] Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.164734 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a183a9b2-c2e9-489e-aabe-0ce929a2682c-secret-volume\") pod \"collect-profiles-29325930-jw2hq\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.164848 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrt9\" (UniqueName: \"kubernetes.io/projected/a183a9b2-c2e9-489e-aabe-0ce929a2682c-kube-api-access-pfrt9\") pod \"collect-profiles-29325930-jw2hq\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 
04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.164909 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a183a9b2-c2e9-489e-aabe-0ce929a2682c-config-volume\") pod \"collect-profiles-29325930-jw2hq\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.265946 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a183a9b2-c2e9-489e-aabe-0ce929a2682c-secret-volume\") pod \"collect-profiles-29325930-jw2hq\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.266351 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfrt9\" (UniqueName: \"kubernetes.io/projected/a183a9b2-c2e9-489e-aabe-0ce929a2682c-kube-api-access-pfrt9\") pod \"collect-profiles-29325930-jw2hq\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.266517 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a183a9b2-c2e9-489e-aabe-0ce929a2682c-config-volume\") pod \"collect-profiles-29325930-jw2hq\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.267430 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a183a9b2-c2e9-489e-aabe-0ce929a2682c-config-volume\") pod \"collect-profiles-29325930-jw2hq\" 
(UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.273507 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a183a9b2-c2e9-489e-aabe-0ce929a2682c-secret-volume\") pod \"collect-profiles-29325930-jw2hq\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.282686 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfrt9\" (UniqueName: \"kubernetes.io/projected/a183a9b2-c2e9-489e-aabe-0ce929a2682c-kube-api-access-pfrt9\") pod \"collect-profiles-29325930-jw2hq\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.460140 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:00 crc kubenswrapper[4802]: I1004 05:30:00.908404 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq"] Oct 04 05:30:01 crc kubenswrapper[4802]: I1004 05:30:01.666624 4802 generic.go:334] "Generic (PLEG): container finished" podID="a183a9b2-c2e9-489e-aabe-0ce929a2682c" containerID="ac67e2d89ae3a27e7833d6fc9a12df2694bb6a45315c80fda81e6f80a34ab7b3" exitCode=0 Oct 04 05:30:01 crc kubenswrapper[4802]: I1004 05:30:01.666701 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" event={"ID":"a183a9b2-c2e9-489e-aabe-0ce929a2682c","Type":"ContainerDied","Data":"ac67e2d89ae3a27e7833d6fc9a12df2694bb6a45315c80fda81e6f80a34ab7b3"} Oct 04 05:30:01 crc kubenswrapper[4802]: I1004 05:30:01.666956 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" event={"ID":"a183a9b2-c2e9-489e-aabe-0ce929a2682c","Type":"ContainerStarted","Data":"337e459bbdb3579b508fb2f57e6b7f57780837fd7aa554e3408d5f25eee19371"} Oct 04 05:30:01 crc kubenswrapper[4802]: I1004 05:30:01.673174 4802 generic.go:334] "Generic (PLEG): container finished" podID="86da9375-0b75-4d5b-8519-e2cba79ba8f2" containerID="b891adb78cf9a33094ea276565a41f675104c5835f9b51ac7ebe30b3fd29d823" exitCode=0 Oct 04 05:30:01 crc kubenswrapper[4802]: I1004 05:30:01.673238 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" event={"ID":"86da9375-0b75-4d5b-8519-e2cba79ba8f2","Type":"ContainerDied","Data":"b891adb78cf9a33094ea276565a41f675104c5835f9b51ac7ebe30b3fd29d823"} Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.059546 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.069698 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.110085 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a183a9b2-c2e9-489e-aabe-0ce929a2682c-secret-volume\") pod \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.110148 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ceph\") pod \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.110192 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5hz5\" (UniqueName: \"kubernetes.io/projected/86da9375-0b75-4d5b-8519-e2cba79ba8f2-kube-api-access-h5hz5\") pod \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.110218 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-inventory\") pod \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.110254 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfrt9\" (UniqueName: \"kubernetes.io/projected/a183a9b2-c2e9-489e-aabe-0ce929a2682c-kube-api-access-pfrt9\") pod 
\"a183a9b2-c2e9-489e-aabe-0ce929a2682c\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.110346 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a183a9b2-c2e9-489e-aabe-0ce929a2682c-config-volume\") pod \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\" (UID: \"a183a9b2-c2e9-489e-aabe-0ce929a2682c\") " Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.110386 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ssh-key\") pod \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\" (UID: \"86da9375-0b75-4d5b-8519-e2cba79ba8f2\") " Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.111206 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a183a9b2-c2e9-489e-aabe-0ce929a2682c-config-volume" (OuterVolumeSpecName: "config-volume") pod "a183a9b2-c2e9-489e-aabe-0ce929a2682c" (UID: "a183a9b2-c2e9-489e-aabe-0ce929a2682c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.115827 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a183a9b2-c2e9-489e-aabe-0ce929a2682c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a183a9b2-c2e9-489e-aabe-0ce929a2682c" (UID: "a183a9b2-c2e9-489e-aabe-0ce929a2682c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.116367 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a183a9b2-c2e9-489e-aabe-0ce929a2682c-kube-api-access-pfrt9" (OuterVolumeSpecName: "kube-api-access-pfrt9") pod "a183a9b2-c2e9-489e-aabe-0ce929a2682c" (UID: "a183a9b2-c2e9-489e-aabe-0ce929a2682c"). InnerVolumeSpecName "kube-api-access-pfrt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.118795 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ceph" (OuterVolumeSpecName: "ceph") pod "86da9375-0b75-4d5b-8519-e2cba79ba8f2" (UID: "86da9375-0b75-4d5b-8519-e2cba79ba8f2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.118825 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86da9375-0b75-4d5b-8519-e2cba79ba8f2-kube-api-access-h5hz5" (OuterVolumeSpecName: "kube-api-access-h5hz5") pod "86da9375-0b75-4d5b-8519-e2cba79ba8f2" (UID: "86da9375-0b75-4d5b-8519-e2cba79ba8f2"). InnerVolumeSpecName "kube-api-access-h5hz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.134669 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "86da9375-0b75-4d5b-8519-e2cba79ba8f2" (UID: "86da9375-0b75-4d5b-8519-e2cba79ba8f2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.135107 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-inventory" (OuterVolumeSpecName: "inventory") pod "86da9375-0b75-4d5b-8519-e2cba79ba8f2" (UID: "86da9375-0b75-4d5b-8519-e2cba79ba8f2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.211569 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.211617 4802 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a183a9b2-c2e9-489e-aabe-0ce929a2682c-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.211637 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.211665 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5hz5\" (UniqueName: \"kubernetes.io/projected/86da9375-0b75-4d5b-8519-e2cba79ba8f2-kube-api-access-h5hz5\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.211678 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86da9375-0b75-4d5b-8519-e2cba79ba8f2-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.211688 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfrt9\" (UniqueName: 
\"kubernetes.io/projected/a183a9b2-c2e9-489e-aabe-0ce929a2682c-kube-api-access-pfrt9\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.211700 4802 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a183a9b2-c2e9-489e-aabe-0ce929a2682c-config-volume\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.360262 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:30:03 crc kubenswrapper[4802]: E1004 05:30:03.360748 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.690618 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" event={"ID":"a183a9b2-c2e9-489e-aabe-0ce929a2682c","Type":"ContainerDied","Data":"337e459bbdb3579b508fb2f57e6b7f57780837fd7aa554e3408d5f25eee19371"} Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.690670 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337e459bbdb3579b508fb2f57e6b7f57780837fd7aa554e3408d5f25eee19371" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.690665 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.692618 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" event={"ID":"86da9375-0b75-4d5b-8519-e2cba79ba8f2","Type":"ContainerDied","Data":"84ea3deaa90767e5a6e17795eba7690bb4e1a59429e47782eb76280a377c1d43"} Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.692660 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84ea3deaa90767e5a6e17795eba7690bb4e1a59429e47782eb76280a377c1d43" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.692732 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bft77" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.780767 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"] Oct 04 05:30:03 crc kubenswrapper[4802]: E1004 05:30:03.781182 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a183a9b2-c2e9-489e-aabe-0ce929a2682c" containerName="collect-profiles" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.781200 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a183a9b2-c2e9-489e-aabe-0ce929a2682c" containerName="collect-profiles" Oct 04 05:30:03 crc kubenswrapper[4802]: E1004 05:30:03.781230 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86da9375-0b75-4d5b-8519-e2cba79ba8f2" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.781238 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="86da9375-0b75-4d5b-8519-e2cba79ba8f2" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.781400 4802 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a183a9b2-c2e9-489e-aabe-0ce929a2682c" containerName="collect-profiles" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.781418 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="86da9375-0b75-4d5b-8519-e2cba79ba8f2" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.782093 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.784541 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.784949 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.784955 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.784994 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.784963 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.784986 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.785106 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.785131 4802 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.791222 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"]
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822327 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822373 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822394 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822422 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822592 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822666 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822736 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822792 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822833 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822850 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.822893 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.823065 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khhxt\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-kube-api-access-khhxt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.823118 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924366 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khhxt\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-kube-api-access-khhxt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924434 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924480 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924512 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924531 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924554 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924592 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924618 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924659 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924683 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924706 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924721 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.924747 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.929712 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.930183 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.930324 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.930339 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.930554 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.931343 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.932231 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.940807 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.941310 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.941469 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.942017 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.944720 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khhxt\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-kube-api-access-khhxt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:03 crc kubenswrapper[4802]: I1004 05:30:03.959224 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:04 crc kubenswrapper[4802]: I1004 05:30:04.099103 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:04 crc kubenswrapper[4802]: I1004 05:30:04.149099 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk"]
Oct 04 05:30:04 crc kubenswrapper[4802]: I1004 05:30:04.157978 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325885-fq6vk"]
Oct 04 05:30:04 crc kubenswrapper[4802]: I1004 05:30:04.371091 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b15b12-47c1-4b49-8851-4e01097927d8" path="/var/lib/kubelet/pods/d7b15b12-47c1-4b49-8851-4e01097927d8/volumes"
Oct 04 05:30:04 crc kubenswrapper[4802]: I1004 05:30:04.627542 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"]
Oct 04 05:30:04 crc kubenswrapper[4802]: W1004 05:30:04.630370 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0448f188_480e_42a0_9b37_e06b990c17bd.slice/crio-c438ea2486d3910c6756cabe94c9f27ccfd30dcaa32771dbb01ad0eb4fe0d911 WatchSource:0}: Error finding container c438ea2486d3910c6756cabe94c9f27ccfd30dcaa32771dbb01ad0eb4fe0d911: Status 404 returned error can't find the container with id c438ea2486d3910c6756cabe94c9f27ccfd30dcaa32771dbb01ad0eb4fe0d911
Oct 04 05:30:04 crc kubenswrapper[4802]: I1004 05:30:04.704529 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw" event={"ID":"0448f188-480e-42a0-9b37-e06b990c17bd","Type":"ContainerStarted","Data":"c438ea2486d3910c6756cabe94c9f27ccfd30dcaa32771dbb01ad0eb4fe0d911"}
Oct 04 05:30:05 crc kubenswrapper[4802]: I1004 05:30:05.713101 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw" event={"ID":"0448f188-480e-42a0-9b37-e06b990c17bd","Type":"ContainerStarted","Data":"1d4c4579a46acec00eb923dea22123fa56c4444592755e461a5ab31b31e5217f"}
Oct 04 05:30:05 crc kubenswrapper[4802]: I1004 05:30:05.741239 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw" podStartSLOduration=2.294249239 podStartE2EDuration="2.741221202s" podCreationTimestamp="2025-10-04 05:30:03 +0000 UTC" firstStartedPulling="2025-10-04 05:30:04.632820598 +0000 UTC m=+2647.040821233" lastFinishedPulling="2025-10-04 05:30:05.079792561 +0000 UTC m=+2647.487793196" observedRunningTime="2025-10-04 05:30:05.728971189 +0000 UTC m=+2648.136971814" watchObservedRunningTime="2025-10-04 05:30:05.741221202 +0000 UTC m=+2648.149221827"
Oct 04 05:30:14 crc kubenswrapper[4802]: I1004 05:30:14.426582 4802 scope.go:117] "RemoveContainer" containerID="b5b7af3ddffc549f41b44587e1f16b02bde9dd98ebb37093ba52bac3ee56e5e8"
Oct 04 05:30:17 crc kubenswrapper[4802]: I1004 05:30:17.359709 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b"
Oct 04 05:30:17 crc kubenswrapper[4802]: E1004 05:30:17.360285 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b"
Oct 04 05:30:31 crc kubenswrapper[4802]: I1004 05:30:31.360209 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b"
Oct 04 05:30:31 crc kubenswrapper[4802]: E1004 05:30:31.361061 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b"
Oct 04 05:30:33 crc kubenswrapper[4802]: I1004 05:30:33.940928 4802 generic.go:334] "Generic (PLEG): container finished" podID="0448f188-480e-42a0-9b37-e06b990c17bd" containerID="1d4c4579a46acec00eb923dea22123fa56c4444592755e461a5ab31b31e5217f" exitCode=0
Oct 04 05:30:33 crc kubenswrapper[4802]: I1004 05:30:33.941024 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw" event={"ID":"0448f188-480e-42a0-9b37-e06b990c17bd","Type":"ContainerDied","Data":"1d4c4579a46acec00eb923dea22123fa56c4444592755e461a5ab31b31e5217f"}
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.327661 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.481533 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.481583 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-repo-setup-combined-ca-bundle\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.481656 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ceph\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.481674 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-libvirt-combined-ca-bundle\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.481693 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.481716 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ovn-combined-ca-bundle\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.481741 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-bootstrap-combined-ca-bundle\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.481804 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khhxt\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-kube-api-access-khhxt\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.482964 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-neutron-metadata-combined-ca-bundle\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.482991 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ssh-key\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.483021 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.483086 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-nova-combined-ca-bundle\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.483108 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-inventory\") pod \"0448f188-480e-42a0-9b37-e06b990c17bd\" (UID: \"0448f188-480e-42a0-9b37-e06b990c17bd\") "
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.487931 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.487982 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.488236 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.489569 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.489622 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ceph" (OuterVolumeSpecName: "ceph") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.490111 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.490172 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-kube-api-access-khhxt" (OuterVolumeSpecName: "kube-api-access-khhxt") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "kube-api-access-khhxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.490244 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.490423 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.495768 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.496053 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.511596 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-inventory" (OuterVolumeSpecName: "inventory") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.516913 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0448f188-480e-42a0-9b37-e06b990c17bd" (UID: "0448f188-480e-42a0-9b37-e06b990c17bd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584620 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khhxt\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-kube-api-access-khhxt\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584735 4802 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584759 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584777 4802 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584797 4802 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584813 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-inventory\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584829 4802 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584845 4802 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584859 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ceph\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584920 4802 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584952 4802 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0448f188-480e-42a0-9b37-e06b990c17bd-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584970 4802 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.584986 4802 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0448f188-480e-42a0-9b37-e06b990c17bd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.959473 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw" event={"ID":"0448f188-480e-42a0-9b37-e06b990c17bd","Type":"ContainerDied","Data":"c438ea2486d3910c6756cabe94c9f27ccfd30dcaa32771dbb01ad0eb4fe0d911"}
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.959508 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c438ea2486d3910c6756cabe94c9f27ccfd30dcaa32771dbb01ad0eb4fe0d911"
Oct 04 05:30:35 crc kubenswrapper[4802]: I1004 05:30:35.959807 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw"
Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.052456 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8"]
Oct 04 05:30:36 crc kubenswrapper[4802]: E1004 05:30:36.052934 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0448f188-480e-42a0-9b37-e06b990c17bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.052957 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0448f188-480e-42a0-9b37-e06b990c17bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.053139 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0448f188-480e-42a0-9b37-e06b990c17bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.053699 4802 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.055950 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.056314 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.058489 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.058671 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.059009 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.078777 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8"] Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.092529 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh6lm\" (UniqueName: \"kubernetes.io/projected/37f49664-6da8-4406-8f02-db640b6bcbd1-kube-api-access-qh6lm\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.092630 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: 
\"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.092697 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.092775 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.193789 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.193856 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh6lm\" (UniqueName: \"kubernetes.io/projected/37f49664-6da8-4406-8f02-db640b6bcbd1-kube-api-access-qh6lm\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.193906 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.193932 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.197866 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.198347 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.204437 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 
05:30:36.212255 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh6lm\" (UniqueName: \"kubernetes.io/projected/37f49664-6da8-4406-8f02-db640b6bcbd1-kube-api-access-qh6lm\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.373575 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.875825 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8"] Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.888812 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 05:30:36 crc kubenswrapper[4802]: I1004 05:30:36.969092 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" event={"ID":"37f49664-6da8-4406-8f02-db640b6bcbd1","Type":"ContainerStarted","Data":"efed7df52a1d629813644a56270be7bff0961374b27c367a5dccc455148ee16c"} Oct 04 05:30:37 crc kubenswrapper[4802]: I1004 05:30:37.977016 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" event={"ID":"37f49664-6da8-4406-8f02-db640b6bcbd1","Type":"ContainerStarted","Data":"05e7e4a162811b701e7f283f68bc4db8b420e9b2a1401bfe36558a0c6ba06d3c"} Oct 04 05:30:38 crc kubenswrapper[4802]: I1004 05:30:38.026276 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" podStartSLOduration=1.609837336 podStartE2EDuration="2.026247879s" podCreationTimestamp="2025-10-04 05:30:36 +0000 UTC" firstStartedPulling="2025-10-04 
05:30:36.888563551 +0000 UTC m=+2679.296564176" lastFinishedPulling="2025-10-04 05:30:37.304974084 +0000 UTC m=+2679.712974719" observedRunningTime="2025-10-04 05:30:38.008755345 +0000 UTC m=+2680.416755970" watchObservedRunningTime="2025-10-04 05:30:38.026247879 +0000 UTC m=+2680.434248544" Oct 04 05:30:43 crc kubenswrapper[4802]: I1004 05:30:43.020074 4802 generic.go:334] "Generic (PLEG): container finished" podID="37f49664-6da8-4406-8f02-db640b6bcbd1" containerID="05e7e4a162811b701e7f283f68bc4db8b420e9b2a1401bfe36558a0c6ba06d3c" exitCode=0 Oct 04 05:30:43 crc kubenswrapper[4802]: I1004 05:30:43.020161 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" event={"ID":"37f49664-6da8-4406-8f02-db640b6bcbd1","Type":"ContainerDied","Data":"05e7e4a162811b701e7f283f68bc4db8b420e9b2a1401bfe36558a0c6ba06d3c"} Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.221161 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6kglq"] Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.226274 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.242007 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kglq"] Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.341493 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-utilities\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.341588 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt8vd\" (UniqueName: \"kubernetes.io/projected/4e734b7f-1189-4868-82b0-cd3168214587-kube-api-access-zt8vd\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.341912 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-catalog-content\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.443361 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-catalog-content\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.443542 4802 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-utilities\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.443615 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8vd\" (UniqueName: \"kubernetes.io/projected/4e734b7f-1189-4868-82b0-cd3168214587-kube-api-access-zt8vd\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.444575 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-catalog-content\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.444683 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-utilities\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.444828 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.465148 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8vd\" (UniqueName: \"kubernetes.io/projected/4e734b7f-1189-4868-82b0-cd3168214587-kube-api-access-zt8vd\") pod \"redhat-operators-6kglq\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.545216 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-inventory\") pod \"37f49664-6da8-4406-8f02-db640b6bcbd1\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.545299 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh6lm\" (UniqueName: \"kubernetes.io/projected/37f49664-6da8-4406-8f02-db640b6bcbd1-kube-api-access-qh6lm\") pod \"37f49664-6da8-4406-8f02-db640b6bcbd1\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.545434 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ssh-key\") pod \"37f49664-6da8-4406-8f02-db640b6bcbd1\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.545457 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ceph\") pod \"37f49664-6da8-4406-8f02-db640b6bcbd1\" (UID: \"37f49664-6da8-4406-8f02-db640b6bcbd1\") " Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.550209 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ceph" (OuterVolumeSpecName: "ceph") pod "37f49664-6da8-4406-8f02-db640b6bcbd1" (UID: "37f49664-6da8-4406-8f02-db640b6bcbd1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.553353 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f49664-6da8-4406-8f02-db640b6bcbd1-kube-api-access-qh6lm" (OuterVolumeSpecName: "kube-api-access-qh6lm") pod "37f49664-6da8-4406-8f02-db640b6bcbd1" (UID: "37f49664-6da8-4406-8f02-db640b6bcbd1"). InnerVolumeSpecName "kube-api-access-qh6lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.559443 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.578144 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "37f49664-6da8-4406-8f02-db640b6bcbd1" (UID: "37f49664-6da8-4406-8f02-db640b6bcbd1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.578255 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-inventory" (OuterVolumeSpecName: "inventory") pod "37f49664-6da8-4406-8f02-db640b6bcbd1" (UID: "37f49664-6da8-4406-8f02-db640b6bcbd1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.647416 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.647447 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.647456 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37f49664-6da8-4406-8f02-db640b6bcbd1-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:44 crc kubenswrapper[4802]: I1004 05:30:44.647467 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh6lm\" (UniqueName: \"kubernetes.io/projected/37f49664-6da8-4406-8f02-db640b6bcbd1-kube-api-access-qh6lm\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.026668 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kglq"] Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.038963 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" event={"ID":"37f49664-6da8-4406-8f02-db640b6bcbd1","Type":"ContainerDied","Data":"efed7df52a1d629813644a56270be7bff0961374b27c367a5dccc455148ee16c"} Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.039344 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efed7df52a1d629813644a56270be7bff0961374b27c367a5dccc455148ee16c" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.039567 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.143937 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl"] Oct 04 05:30:45 crc kubenswrapper[4802]: E1004 05:30:45.144305 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f49664-6da8-4406-8f02-db640b6bcbd1" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.144321 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f49664-6da8-4406-8f02-db640b6bcbd1" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.144489 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f49664-6da8-4406-8f02-db640b6bcbd1" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.145080 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.150265 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.150502 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.150682 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.150990 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.151091 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.151266 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.182233 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl"] Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.265323 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.265411 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.265616 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.265702 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgkd\" (UniqueName: \"kubernetes.io/projected/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-kube-api-access-fxgkd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.265783 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.265861 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.359476 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:30:45 crc kubenswrapper[4802]: E1004 05:30:45.359688 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.367217 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.367273 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgkd\" (UniqueName: \"kubernetes.io/projected/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-kube-api-access-fxgkd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.367316 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc 
kubenswrapper[4802]: I1004 05:30:45.367338 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.367364 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.367388 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.368531 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.373182 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: 
\"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.373368 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.373716 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.373878 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.390869 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgkd\" (UniqueName: \"kubernetes.io/projected/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-kube-api-access-fxgkd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-brppl\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:45 crc kubenswrapper[4802]: I1004 05:30:45.475560 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:30:46 crc kubenswrapper[4802]: I1004 05:30:45.964992 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl"] Oct 04 05:30:46 crc kubenswrapper[4802]: W1004 05:30:45.969368 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad62a6a8_f574_42b8_b558_fe1f7bf9d36a.slice/crio-8dd6dedb1ccfb158936b5d700cdd3c6da4bc52c059dff6148169a110fcc52456 WatchSource:0}: Error finding container 8dd6dedb1ccfb158936b5d700cdd3c6da4bc52c059dff6148169a110fcc52456: Status 404 returned error can't find the container with id 8dd6dedb1ccfb158936b5d700cdd3c6da4bc52c059dff6148169a110fcc52456 Oct 04 05:30:46 crc kubenswrapper[4802]: I1004 05:30:46.047146 4802 generic.go:334] "Generic (PLEG): container finished" podID="4e734b7f-1189-4868-82b0-cd3168214587" containerID="eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2" exitCode=0 Oct 04 05:30:46 crc kubenswrapper[4802]: I1004 05:30:46.047196 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kglq" event={"ID":"4e734b7f-1189-4868-82b0-cd3168214587","Type":"ContainerDied","Data":"eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2"} Oct 04 05:30:46 crc kubenswrapper[4802]: I1004 05:30:46.047246 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kglq" event={"ID":"4e734b7f-1189-4868-82b0-cd3168214587","Type":"ContainerStarted","Data":"978ec1271de50ea93d32d3c0f0124bdeef947cb085305245ecab0af64b9d0ea5"} Oct 04 05:30:46 crc kubenswrapper[4802]: I1004 05:30:46.048334 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" 
event={"ID":"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a","Type":"ContainerStarted","Data":"8dd6dedb1ccfb158936b5d700cdd3c6da4bc52c059dff6148169a110fcc52456"} Oct 04 05:30:47 crc kubenswrapper[4802]: I1004 05:30:47.061334 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" event={"ID":"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a","Type":"ContainerStarted","Data":"3eb070eddab0f3a7c45f15a9fcad04a8d8ea420cc8014539362a922a4a1df8a6"} Oct 04 05:30:47 crc kubenswrapper[4802]: I1004 05:30:47.067092 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kglq" event={"ID":"4e734b7f-1189-4868-82b0-cd3168214587","Type":"ContainerStarted","Data":"78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a"} Oct 04 05:30:47 crc kubenswrapper[4802]: I1004 05:30:47.098195 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" podStartSLOduration=1.647351899 podStartE2EDuration="2.098158813s" podCreationTimestamp="2025-10-04 05:30:45 +0000 UTC" firstStartedPulling="2025-10-04 05:30:45.971346691 +0000 UTC m=+2688.379347316" lastFinishedPulling="2025-10-04 05:30:46.422153605 +0000 UTC m=+2688.830154230" observedRunningTime="2025-10-04 05:30:47.088018081 +0000 UTC m=+2689.496018716" watchObservedRunningTime="2025-10-04 05:30:47.098158813 +0000 UTC m=+2689.506159438" Oct 04 05:30:48 crc kubenswrapper[4802]: I1004 05:30:48.080611 4802 generic.go:334] "Generic (PLEG): container finished" podID="4e734b7f-1189-4868-82b0-cd3168214587" containerID="78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a" exitCode=0 Oct 04 05:30:48 crc kubenswrapper[4802]: I1004 05:30:48.080681 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kglq" 
event={"ID":"4e734b7f-1189-4868-82b0-cd3168214587","Type":"ContainerDied","Data":"78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a"} Oct 04 05:30:49 crc kubenswrapper[4802]: I1004 05:30:49.092494 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kglq" event={"ID":"4e734b7f-1189-4868-82b0-cd3168214587","Type":"ContainerStarted","Data":"f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b"} Oct 04 05:30:49 crc kubenswrapper[4802]: I1004 05:30:49.111964 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6kglq" podStartSLOduration=2.583256695 podStartE2EDuration="5.111938014s" podCreationTimestamp="2025-10-04 05:30:44 +0000 UTC" firstStartedPulling="2025-10-04 05:30:46.048545464 +0000 UTC m=+2688.456546089" lastFinishedPulling="2025-10-04 05:30:48.577226773 +0000 UTC m=+2690.985227408" observedRunningTime="2025-10-04 05:30:49.111134571 +0000 UTC m=+2691.519135196" watchObservedRunningTime="2025-10-04 05:30:49.111938014 +0000 UTC m=+2691.519938659" Oct 04 05:30:54 crc kubenswrapper[4802]: I1004 05:30:54.559998 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:54 crc kubenswrapper[4802]: I1004 05:30:54.561689 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:54 crc kubenswrapper[4802]: I1004 05:30:54.607476 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:55 crc kubenswrapper[4802]: I1004 05:30:55.229139 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:55 crc kubenswrapper[4802]: I1004 05:30:55.295997 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-6kglq"] Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.169511 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6kglq" podUID="4e734b7f-1189-4868-82b0-cd3168214587" containerName="registry-server" containerID="cri-o://f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b" gracePeriod=2 Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.634085 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.712634 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-catalog-content\") pod \"4e734b7f-1189-4868-82b0-cd3168214587\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.712770 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt8vd\" (UniqueName: \"kubernetes.io/projected/4e734b7f-1189-4868-82b0-cd3168214587-kube-api-access-zt8vd\") pod \"4e734b7f-1189-4868-82b0-cd3168214587\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.712825 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-utilities\") pod \"4e734b7f-1189-4868-82b0-cd3168214587\" (UID: \"4e734b7f-1189-4868-82b0-cd3168214587\") " Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.713657 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-utilities" (OuterVolumeSpecName: "utilities") pod "4e734b7f-1189-4868-82b0-cd3168214587" (UID: 
"4e734b7f-1189-4868-82b0-cd3168214587"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.714111 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.721774 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e734b7f-1189-4868-82b0-cd3168214587-kube-api-access-zt8vd" (OuterVolumeSpecName: "kube-api-access-zt8vd") pod "4e734b7f-1189-4868-82b0-cd3168214587" (UID: "4e734b7f-1189-4868-82b0-cd3168214587"). InnerVolumeSpecName "kube-api-access-zt8vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.798027 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e734b7f-1189-4868-82b0-cd3168214587" (UID: "4e734b7f-1189-4868-82b0-cd3168214587"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.815747 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt8vd\" (UniqueName: \"kubernetes.io/projected/4e734b7f-1189-4868-82b0-cd3168214587-kube-api-access-zt8vd\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:57 crc kubenswrapper[4802]: I1004 05:30:57.815777 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e734b7f-1189-4868-82b0-cd3168214587-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.178763 4802 generic.go:334] "Generic (PLEG): container finished" podID="4e734b7f-1189-4868-82b0-cd3168214587" containerID="f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b" exitCode=0 Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.178850 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kglq" event={"ID":"4e734b7f-1189-4868-82b0-cd3168214587","Type":"ContainerDied","Data":"f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b"} Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.178907 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kglq" event={"ID":"4e734b7f-1189-4868-82b0-cd3168214587","Type":"ContainerDied","Data":"978ec1271de50ea93d32d3c0f0124bdeef947cb085305245ecab0af64b9d0ea5"} Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.178949 4802 scope.go:117] "RemoveContainer" containerID="f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.179272 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6kglq" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.201952 4802 scope.go:117] "RemoveContainer" containerID="78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.255400 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kglq"] Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.257165 4802 scope.go:117] "RemoveContainer" containerID="eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.265265 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6kglq"] Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.288845 4802 scope.go:117] "RemoveContainer" containerID="f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b" Oct 04 05:30:58 crc kubenswrapper[4802]: E1004 05:30:58.289458 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b\": container with ID starting with f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b not found: ID does not exist" containerID="f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.289513 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b"} err="failed to get container status \"f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b\": rpc error: code = NotFound desc = could not find container \"f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b\": container with ID starting with f164d141af4755c3e514b647fd5cb39eb06b5c894b857750be7310159bd1858b not found: ID does 
not exist" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.289552 4802 scope.go:117] "RemoveContainer" containerID="78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a" Oct 04 05:30:58 crc kubenswrapper[4802]: E1004 05:30:58.289929 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a\": container with ID starting with 78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a not found: ID does not exist" containerID="78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.289999 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a"} err="failed to get container status \"78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a\": rpc error: code = NotFound desc = could not find container \"78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a\": container with ID starting with 78f0ddf13ed3e635b03d59fc65398d9c4ffc530fb8f305ac81fa0816897ea75a not found: ID does not exist" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.290026 4802 scope.go:117] "RemoveContainer" containerID="eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2" Oct 04 05:30:58 crc kubenswrapper[4802]: E1004 05:30:58.290285 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2\": container with ID starting with eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2 not found: ID does not exist" containerID="eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.290303 4802 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2"} err="failed to get container status \"eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2\": rpc error: code = NotFound desc = could not find container \"eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2\": container with ID starting with eaa2c8e8e095aa3a7fe054a4df5996f9b5542c5dedd9309e8d2081f1f71683e2 not found: ID does not exist" Oct 04 05:30:58 crc kubenswrapper[4802]: I1004 05:30:58.370195 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e734b7f-1189-4868-82b0-cd3168214587" path="/var/lib/kubelet/pods/4e734b7f-1189-4868-82b0-cd3168214587/volumes" Oct 04 05:31:00 crc kubenswrapper[4802]: I1004 05:31:00.360712 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:31:00 crc kubenswrapper[4802]: E1004 05:31:00.361008 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:31:15 crc kubenswrapper[4802]: I1004 05:31:15.360604 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:31:15 crc kubenswrapper[4802]: E1004 05:31:15.361807 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:31:29 crc kubenswrapper[4802]: I1004 05:31:29.359555 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:31:30 crc kubenswrapper[4802]: I1004 05:31:30.443576 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"c402b9be756f7c07beeb9015d7a8cb4d938d61c476b98861866855d2ce212e4e"} Oct 04 05:32:00 crc kubenswrapper[4802]: I1004 05:32:00.673020 4802 generic.go:334] "Generic (PLEG): container finished" podID="ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" containerID="3eb070eddab0f3a7c45f15a9fcad04a8d8ea420cc8014539362a922a4a1df8a6" exitCode=0 Oct 04 05:32:00 crc kubenswrapper[4802]: I1004 05:32:00.673097 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" event={"ID":"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a","Type":"ContainerDied","Data":"3eb070eddab0f3a7c45f15a9fcad04a8d8ea420cc8014539362a922a4a1df8a6"} Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.098266 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.210987 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-inventory\") pod \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.211423 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovncontroller-config-0\") pod \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.211448 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ceph\") pod \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.211462 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ssh-key\") pod \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.211505 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgkd\" (UniqueName: \"kubernetes.io/projected/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-kube-api-access-fxgkd\") pod \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.211549 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovn-combined-ca-bundle\") pod \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\" (UID: \"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a\") " Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.217353 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ceph" (OuterVolumeSpecName: "ceph") pod "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" (UID: "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.218842 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" (UID: "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.218887 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-kube-api-access-fxgkd" (OuterVolumeSpecName: "kube-api-access-fxgkd") pod "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" (UID: "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a"). InnerVolumeSpecName "kube-api-access-fxgkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.239774 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" (UID: "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.244709 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" (UID: "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.266479 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-inventory" (OuterVolumeSpecName: "inventory") pod "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" (UID: "ad62a6a8-f574-42b8-b558-fe1f7bf9d36a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.313489 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgkd\" (UniqueName: \"kubernetes.io/projected/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-kube-api-access-fxgkd\") on node \"crc\" DevicePath \"\"" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.313576 4802 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.313592 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.313601 4802 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.313610 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.313618 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad62a6a8-f574-42b8-b558-fe1f7bf9d36a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.694340 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" event={"ID":"ad62a6a8-f574-42b8-b558-fe1f7bf9d36a","Type":"ContainerDied","Data":"8dd6dedb1ccfb158936b5d700cdd3c6da4bc52c059dff6148169a110fcc52456"} Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.694389 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dd6dedb1ccfb158936b5d700cdd3c6da4bc52c059dff6148169a110fcc52456" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.694464 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-brppl" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.770341 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz"] Oct 04 05:32:02 crc kubenswrapper[4802]: E1004 05:32:02.770995 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e734b7f-1189-4868-82b0-cd3168214587" containerName="extract-content" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.771008 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e734b7f-1189-4868-82b0-cd3168214587" containerName="extract-content" Oct 04 05:32:02 crc kubenswrapper[4802]: E1004 05:32:02.771026 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e734b7f-1189-4868-82b0-cd3168214587" containerName="extract-utilities" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.771032 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e734b7f-1189-4868-82b0-cd3168214587" containerName="extract-utilities" Oct 04 05:32:02 crc kubenswrapper[4802]: E1004 05:32:02.771052 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e734b7f-1189-4868-82b0-cd3168214587" containerName="registry-server" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.771059 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e734b7f-1189-4868-82b0-cd3168214587" containerName="registry-server" Oct 04 05:32:02 crc kubenswrapper[4802]: E1004 05:32:02.771079 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.771086 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.771242 4802 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ad62a6a8-f574-42b8-b558-fe1f7bf9d36a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.771253 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e734b7f-1189-4868-82b0-cd3168214587" containerName="registry-server" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.772158 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.775900 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.776033 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.776205 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.776273 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.776330 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.776459 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.776580 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.787403 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz"] Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.924222 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.924271 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.924969 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.925041 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 
04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.925104 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.925126 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gc25\" (UniqueName: \"kubernetes.io/projected/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-kube-api-access-9gc25\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:02 crc kubenswrapper[4802]: I1004 05:32:02.925321 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.027391 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gc25\" (UniqueName: \"kubernetes.io/projected/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-kube-api-access-9gc25\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.027521 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.027666 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.027699 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.027756 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.027795 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.027838 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.031112 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.031470 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.032036 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.032212 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.032621 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.032836 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.043719 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gc25\" (UniqueName: \"kubernetes.io/projected/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-kube-api-access-9gc25\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.097175 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.593474 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz"] Oct 04 05:32:03 crc kubenswrapper[4802]: W1004 05:32:03.599359 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1c28ae5_e04b_489e_96fe_aab0c804d5b4.slice/crio-2778afe132ebe121249d3d0119a67df98da91220ae1dfa64d701e8a875b6a289 WatchSource:0}: Error finding container 2778afe132ebe121249d3d0119a67df98da91220ae1dfa64d701e8a875b6a289: Status 404 returned error can't find the container with id 2778afe132ebe121249d3d0119a67df98da91220ae1dfa64d701e8a875b6a289 Oct 04 05:32:03 crc kubenswrapper[4802]: I1004 05:32:03.705004 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" event={"ID":"e1c28ae5-e04b-489e-96fe-aab0c804d5b4","Type":"ContainerStarted","Data":"2778afe132ebe121249d3d0119a67df98da91220ae1dfa64d701e8a875b6a289"} Oct 04 05:32:04 crc kubenswrapper[4802]: I1004 05:32:04.714682 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" event={"ID":"e1c28ae5-e04b-489e-96fe-aab0c804d5b4","Type":"ContainerStarted","Data":"636da5fd7128399430d53d480ee0209959431c007a4c142bfbdfdf6a003ae344"} Oct 04 05:32:04 crc kubenswrapper[4802]: I1004 05:32:04.731491 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" podStartSLOduration=2.112740449 podStartE2EDuration="2.731473711s" podCreationTimestamp="2025-10-04 05:32:02 +0000 UTC" firstStartedPulling="2025-10-04 05:32:03.605068857 +0000 UTC m=+2766.013069482" lastFinishedPulling="2025-10-04 05:32:04.223802119 +0000 UTC 
m=+2766.631802744" observedRunningTime="2025-10-04 05:32:04.728965408 +0000 UTC m=+2767.136966033" watchObservedRunningTime="2025-10-04 05:32:04.731473711 +0000 UTC m=+2767.139474336" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.751348 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c9h"] Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.756673 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.785911 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c9h"] Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.853362 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-catalog-content\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.853524 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-utilities\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.853561 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthkr\" (UniqueName: \"kubernetes.io/projected/7461c63a-7221-4f60-b37c-bea93131da05-kube-api-access-mthkr\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: 
I1004 05:32:48.955418 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-utilities\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.955483 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mthkr\" (UniqueName: \"kubernetes.io/projected/7461c63a-7221-4f60-b37c-bea93131da05-kube-api-access-mthkr\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.955523 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-catalog-content\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.956135 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-catalog-content\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.956291 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-utilities\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:48 crc kubenswrapper[4802]: I1004 05:32:48.980149 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthkr\" (UniqueName: \"kubernetes.io/projected/7461c63a-7221-4f60-b37c-bea93131da05-kube-api-access-mthkr\") pod \"redhat-marketplace-b9c9h\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:49 crc kubenswrapper[4802]: I1004 05:32:49.091303 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:49 crc kubenswrapper[4802]: I1004 05:32:49.532959 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c9h"] Oct 04 05:32:50 crc kubenswrapper[4802]: I1004 05:32:50.145503 4802 generic.go:334] "Generic (PLEG): container finished" podID="7461c63a-7221-4f60-b37c-bea93131da05" containerID="a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb" exitCode=0 Oct 04 05:32:50 crc kubenswrapper[4802]: I1004 05:32:50.145592 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c9h" event={"ID":"7461c63a-7221-4f60-b37c-bea93131da05","Type":"ContainerDied","Data":"a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb"} Oct 04 05:32:50 crc kubenswrapper[4802]: I1004 05:32:50.145790 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c9h" event={"ID":"7461c63a-7221-4f60-b37c-bea93131da05","Type":"ContainerStarted","Data":"1e9746d0032880264c33ee1330f6a4520418ee9e958bd6b9f6e8060f5956e3a7"} Oct 04 05:32:51 crc kubenswrapper[4802]: I1004 05:32:51.156519 4802 generic.go:334] "Generic (PLEG): container finished" podID="7461c63a-7221-4f60-b37c-bea93131da05" containerID="472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0" exitCode=0 Oct 04 05:32:51 crc kubenswrapper[4802]: I1004 05:32:51.156565 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-b9c9h" event={"ID":"7461c63a-7221-4f60-b37c-bea93131da05","Type":"ContainerDied","Data":"472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0"} Oct 04 05:32:52 crc kubenswrapper[4802]: I1004 05:32:52.168953 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c9h" event={"ID":"7461c63a-7221-4f60-b37c-bea93131da05","Type":"ContainerStarted","Data":"d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d"} Oct 04 05:32:52 crc kubenswrapper[4802]: I1004 05:32:52.191221 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b9c9h" podStartSLOduration=2.796868301 podStartE2EDuration="4.191197895s" podCreationTimestamp="2025-10-04 05:32:48 +0000 UTC" firstStartedPulling="2025-10-04 05:32:50.14726865 +0000 UTC m=+2812.555269275" lastFinishedPulling="2025-10-04 05:32:51.541598244 +0000 UTC m=+2813.949598869" observedRunningTime="2025-10-04 05:32:52.184920814 +0000 UTC m=+2814.592921439" watchObservedRunningTime="2025-10-04 05:32:52.191197895 +0000 UTC m=+2814.599198520" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.115586 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fkgfv"] Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.117893 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.130801 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkgfv"] Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.302141 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79b1edc-f043-486b-846f-989f2791b3e9-catalog-content\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.302198 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79b1edc-f043-486b-846f-989f2791b3e9-utilities\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.302456 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tnzl\" (UniqueName: \"kubernetes.io/projected/c79b1edc-f043-486b-846f-989f2791b3e9-kube-api-access-6tnzl\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.404283 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tnzl\" (UniqueName: \"kubernetes.io/projected/c79b1edc-f043-486b-846f-989f2791b3e9-kube-api-access-6tnzl\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.404417 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79b1edc-f043-486b-846f-989f2791b3e9-catalog-content\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.404446 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79b1edc-f043-486b-846f-989f2791b3e9-utilities\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.404977 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79b1edc-f043-486b-846f-989f2791b3e9-catalog-content\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.405030 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79b1edc-f043-486b-846f-989f2791b3e9-utilities\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.430403 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tnzl\" (UniqueName: \"kubernetes.io/projected/c79b1edc-f043-486b-846f-989f2791b3e9-kube-api-access-6tnzl\") pod \"community-operators-fkgfv\" (UID: \"c79b1edc-f043-486b-846f-989f2791b3e9\") " pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.442352 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:32:56 crc kubenswrapper[4802]: I1004 05:32:56.986287 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkgfv"] Oct 04 05:32:57 crc kubenswrapper[4802]: I1004 05:32:57.214753 4802 generic.go:334] "Generic (PLEG): container finished" podID="c79b1edc-f043-486b-846f-989f2791b3e9" containerID="45c57a66d5ffb69297d7ead50f5fc7ba5da58e4c084f8c7669c2306c028cee6e" exitCode=0 Oct 04 05:32:57 crc kubenswrapper[4802]: I1004 05:32:57.214798 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkgfv" event={"ID":"c79b1edc-f043-486b-846f-989f2791b3e9","Type":"ContainerDied","Data":"45c57a66d5ffb69297d7ead50f5fc7ba5da58e4c084f8c7669c2306c028cee6e"} Oct 04 05:32:57 crc kubenswrapper[4802]: I1004 05:32:57.214858 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkgfv" event={"ID":"c79b1edc-f043-486b-846f-989f2791b3e9","Type":"ContainerStarted","Data":"ebd83b95fcaf85174e2c9351df1d8a59d9967f3797002cce658526594d645733"} Oct 04 05:32:59 crc kubenswrapper[4802]: I1004 05:32:59.092334 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:59 crc kubenswrapper[4802]: I1004 05:32:59.092629 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:59 crc kubenswrapper[4802]: I1004 05:32:59.138753 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:32:59 crc kubenswrapper[4802]: I1004 05:32:59.277136 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:33:00 crc kubenswrapper[4802]: I1004 05:33:00.491212 4802 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c9h"] Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.248392 4802 generic.go:334] "Generic (PLEG): container finished" podID="c79b1edc-f043-486b-846f-989f2791b3e9" containerID="d062294669411c99dbaa9cd9921419e1d510ba3a5eceb62d522ab200cef2703d" exitCode=0 Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.248445 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkgfv" event={"ID":"c79b1edc-f043-486b-846f-989f2791b3e9","Type":"ContainerDied","Data":"d062294669411c99dbaa9cd9921419e1d510ba3a5eceb62d522ab200cef2703d"} Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.248922 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b9c9h" podUID="7461c63a-7221-4f60-b37c-bea93131da05" containerName="registry-server" containerID="cri-o://d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d" gracePeriod=2 Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.680257 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.717103 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-utilities\") pod \"7461c63a-7221-4f60-b37c-bea93131da05\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.717275 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mthkr\" (UniqueName: \"kubernetes.io/projected/7461c63a-7221-4f60-b37c-bea93131da05-kube-api-access-mthkr\") pod \"7461c63a-7221-4f60-b37c-bea93131da05\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.717389 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-catalog-content\") pod \"7461c63a-7221-4f60-b37c-bea93131da05\" (UID: \"7461c63a-7221-4f60-b37c-bea93131da05\") " Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.718156 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-utilities" (OuterVolumeSpecName: "utilities") pod "7461c63a-7221-4f60-b37c-bea93131da05" (UID: "7461c63a-7221-4f60-b37c-bea93131da05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.724961 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7461c63a-7221-4f60-b37c-bea93131da05-kube-api-access-mthkr" (OuterVolumeSpecName: "kube-api-access-mthkr") pod "7461c63a-7221-4f60-b37c-bea93131da05" (UID: "7461c63a-7221-4f60-b37c-bea93131da05"). InnerVolumeSpecName "kube-api-access-mthkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.730541 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7461c63a-7221-4f60-b37c-bea93131da05" (UID: "7461c63a-7221-4f60-b37c-bea93131da05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.824269 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mthkr\" (UniqueName: \"kubernetes.io/projected/7461c63a-7221-4f60-b37c-bea93131da05-kube-api-access-mthkr\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.824314 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:01 crc kubenswrapper[4802]: I1004 05:33:01.824333 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7461c63a-7221-4f60-b37c-bea93131da05-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.258605 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkgfv" event={"ID":"c79b1edc-f043-486b-846f-989f2791b3e9","Type":"ContainerStarted","Data":"5ba955f72e3076528dc30a848832ba6c39d72b58f819a9c5617d837dc823e1ff"} Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.261282 4802 generic.go:334] "Generic (PLEG): container finished" podID="7461c63a-7221-4f60-b37c-bea93131da05" containerID="d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d" exitCode=0 Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.261354 4802 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-b9c9h" event={"ID":"7461c63a-7221-4f60-b37c-bea93131da05","Type":"ContainerDied","Data":"d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d"} Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.261385 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9c9h" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.261415 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9c9h" event={"ID":"7461c63a-7221-4f60-b37c-bea93131da05","Type":"ContainerDied","Data":"1e9746d0032880264c33ee1330f6a4520418ee9e958bd6b9f6e8060f5956e3a7"} Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.261441 4802 scope.go:117] "RemoveContainer" containerID="d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.283474 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fkgfv" podStartSLOduration=1.835042041 podStartE2EDuration="6.283454774s" podCreationTimestamp="2025-10-04 05:32:56 +0000 UTC" firstStartedPulling="2025-10-04 05:32:57.21635023 +0000 UTC m=+2819.624350855" lastFinishedPulling="2025-10-04 05:33:01.664762963 +0000 UTC m=+2824.072763588" observedRunningTime="2025-10-04 05:33:02.275055222 +0000 UTC m=+2824.683055867" watchObservedRunningTime="2025-10-04 05:33:02.283454774 +0000 UTC m=+2824.691455399" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.296786 4802 scope.go:117] "RemoveContainer" containerID="472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.297701 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9c9h"] Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.306860 4802 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-b9c9h"] Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.329728 4802 scope.go:117] "RemoveContainer" containerID="a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.368259 4802 scope.go:117] "RemoveContainer" containerID="d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d" Oct 04 05:33:02 crc kubenswrapper[4802]: E1004 05:33:02.368764 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d\": container with ID starting with d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d not found: ID does not exist" containerID="d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.368807 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d"} err="failed to get container status \"d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d\": rpc error: code = NotFound desc = could not find container \"d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d\": container with ID starting with d15795b8503d1a6403159c209ed72d03d1726ba850a277723ba70dc543a90e0d not found: ID does not exist" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.368837 4802 scope.go:117] "RemoveContainer" containerID="472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0" Oct 04 05:33:02 crc kubenswrapper[4802]: E1004 05:33:02.369153 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0\": container with ID starting with 
472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0 not found: ID does not exist" containerID="472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.369207 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0"} err="failed to get container status \"472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0\": rpc error: code = NotFound desc = could not find container \"472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0\": container with ID starting with 472f63b64bd6be3abb3ef6ef22ba0a532322b6b03a9fc99e0d49f869838970d0 not found: ID does not exist" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.369224 4802 scope.go:117] "RemoveContainer" containerID="a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb" Oct 04 05:33:02 crc kubenswrapper[4802]: E1004 05:33:02.369572 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb\": container with ID starting with a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb not found: ID does not exist" containerID="a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.369605 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb"} err="failed to get container status \"a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb\": rpc error: code = NotFound desc = could not find container \"a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb\": container with ID starting with a7c88ae9393d8e9a9b31f264d0e487362dfddeb7b1c6db037f73013d5af671cb not found: ID does not 
exist" Oct 04 05:33:02 crc kubenswrapper[4802]: I1004 05:33:02.373169 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7461c63a-7221-4f60-b37c-bea93131da05" path="/var/lib/kubelet/pods/7461c63a-7221-4f60-b37c-bea93131da05/volumes" Oct 04 05:33:06 crc kubenswrapper[4802]: I1004 05:33:06.443513 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:33:06 crc kubenswrapper[4802]: I1004 05:33:06.444014 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:33:06 crc kubenswrapper[4802]: I1004 05:33:06.501138 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:33:07 crc kubenswrapper[4802]: I1004 05:33:07.360975 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fkgfv" Oct 04 05:33:07 crc kubenswrapper[4802]: I1004 05:33:07.766850 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkgfv"] Oct 04 05:33:07 crc kubenswrapper[4802]: I1004 05:33:07.892545 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vl5xn"] Oct 04 05:33:07 crc kubenswrapper[4802]: I1004 05:33:07.892850 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vl5xn" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerName="registry-server" containerID="cri-o://fc8246cbc61cfd24d70a35ed30bf887a6e489ba2384eb729d171ea35c354db96" gracePeriod=2 Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.319170 4802 generic.go:334] "Generic (PLEG): container finished" podID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerID="fc8246cbc61cfd24d70a35ed30bf887a6e489ba2384eb729d171ea35c354db96" exitCode=0 Oct 04 
05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.319767 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl5xn" event={"ID":"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3","Type":"ContainerDied","Data":"fc8246cbc61cfd24d70a35ed30bf887a6e489ba2384eb729d171ea35c354db96"} Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.319832 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl5xn" event={"ID":"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3","Type":"ContainerDied","Data":"5fb00b5f34f277d62ac59a5e1e1a67c162a33e3e2674e4d1de10025f4b0b8525"} Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.319844 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fb00b5f34f277d62ac59a5e1e1a67c162a33e3e2674e4d1de10025f4b0b8525" Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.330517 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl5xn" Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.450664 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-catalog-content\") pod \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.450720 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nswm4\" (UniqueName: \"kubernetes.io/projected/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-kube-api-access-nswm4\") pod \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.450868 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-utilities\") pod \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\" (UID: \"5d3e7f92-a81f-46c2-aadf-7766cb10e2e3\") " Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.451576 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-utilities" (OuterVolumeSpecName: "utilities") pod "5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" (UID: "5d3e7f92-a81f-46c2-aadf-7766cb10e2e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.469820 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-kube-api-access-nswm4" (OuterVolumeSpecName: "kube-api-access-nswm4") pod "5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" (UID: "5d3e7f92-a81f-46c2-aadf-7766cb10e2e3"). InnerVolumeSpecName "kube-api-access-nswm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.492971 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" (UID: "5d3e7f92-a81f-46c2-aadf-7766cb10e2e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.553273 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.553320 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nswm4\" (UniqueName: \"kubernetes.io/projected/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-kube-api-access-nswm4\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:08 crc kubenswrapper[4802]: I1004 05:33:08.553334 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:09 crc kubenswrapper[4802]: I1004 05:33:09.328904 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl5xn" Oct 04 05:33:09 crc kubenswrapper[4802]: I1004 05:33:09.360298 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vl5xn"] Oct 04 05:33:09 crc kubenswrapper[4802]: I1004 05:33:09.367214 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vl5xn"] Oct 04 05:33:10 crc kubenswrapper[4802]: I1004 05:33:10.370118 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" path="/var/lib/kubelet/pods/5d3e7f92-a81f-46c2-aadf-7766cb10e2e3/volumes" Oct 04 05:33:13 crc kubenswrapper[4802]: I1004 05:33:13.361428 4802 generic.go:334] "Generic (PLEG): container finished" podID="e1c28ae5-e04b-489e-96fe-aab0c804d5b4" containerID="636da5fd7128399430d53d480ee0209959431c007a4c142bfbdfdf6a003ae344" exitCode=0 Oct 04 05:33:13 crc kubenswrapper[4802]: I1004 05:33:13.361530 4802 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" event={"ID":"e1c28ae5-e04b-489e-96fe-aab0c804d5b4","Type":"ContainerDied","Data":"636da5fd7128399430d53d480ee0209959431c007a4c142bfbdfdf6a003ae344"} Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.528016 4802 scope.go:117] "RemoveContainer" containerID="44cd917cdaa66ef569c085783429c8dec040bfe40d089281b92bbd3dd73e9eeb" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.554664 4802 scope.go:117] "RemoveContainer" containerID="abe3dc2c715785e5857c5d0d7a18e2f06ed589c36ed28e5e6b6700b0f8b9bdec" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.596889 4802 scope.go:117] "RemoveContainer" containerID="fc8246cbc61cfd24d70a35ed30bf887a6e489ba2384eb729d171ea35c354db96" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.783520 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.960719 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-nova-metadata-neutron-config-0\") pod \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.960793 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.960909 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ceph\") pod \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.960930 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ssh-key\") pod \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.960979 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-inventory\") pod \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.961021 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-metadata-combined-ca-bundle\") pod \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.961053 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gc25\" (UniqueName: \"kubernetes.io/projected/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-kube-api-access-9gc25\") pod \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\" (UID: \"e1c28ae5-e04b-489e-96fe-aab0c804d5b4\") " Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.967390 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-kube-api-access-9gc25" (OuterVolumeSpecName: "kube-api-access-9gc25") pod "e1c28ae5-e04b-489e-96fe-aab0c804d5b4" (UID: "e1c28ae5-e04b-489e-96fe-aab0c804d5b4"). 
InnerVolumeSpecName "kube-api-access-9gc25". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.967437 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e1c28ae5-e04b-489e-96fe-aab0c804d5b4" (UID: "e1c28ae5-e04b-489e-96fe-aab0c804d5b4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.971014 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ceph" (OuterVolumeSpecName: "ceph") pod "e1c28ae5-e04b-489e-96fe-aab0c804d5b4" (UID: "e1c28ae5-e04b-489e-96fe-aab0c804d5b4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.995562 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e1c28ae5-e04b-489e-96fe-aab0c804d5b4" (UID: "e1c28ae5-e04b-489e-96fe-aab0c804d5b4"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.996655 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e1c28ae5-e04b-489e-96fe-aab0c804d5b4" (UID: "e1c28ae5-e04b-489e-96fe-aab0c804d5b4"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:33:14 crc kubenswrapper[4802]: I1004 05:33:14.998311 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e1c28ae5-e04b-489e-96fe-aab0c804d5b4" (UID: "e1c28ae5-e04b-489e-96fe-aab0c804d5b4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.007896 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-inventory" (OuterVolumeSpecName: "inventory") pod "e1c28ae5-e04b-489e-96fe-aab0c804d5b4" (UID: "e1c28ae5-e04b-489e-96fe-aab0c804d5b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.063037 4802 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.063078 4802 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.063091 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.063101 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-ssh-key\") on node \"crc\" DevicePath 
\"\"" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.063111 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.063119 4802 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.063129 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gc25\" (UniqueName: \"kubernetes.io/projected/e1c28ae5-e04b-489e-96fe-aab0c804d5b4-kube-api-access-9gc25\") on node \"crc\" DevicePath \"\"" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.382023 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" event={"ID":"e1c28ae5-e04b-489e-96fe-aab0c804d5b4","Type":"ContainerDied","Data":"2778afe132ebe121249d3d0119a67df98da91220ae1dfa64d701e8a875b6a289"} Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.382077 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2778afe132ebe121249d3d0119a67df98da91220ae1dfa64d701e8a875b6a289" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.382152 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.570745 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p"] Oct 04 05:33:15 crc kubenswrapper[4802]: E1004 05:33:15.571084 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c28ae5-e04b-489e-96fe-aab0c804d5b4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571099 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c28ae5-e04b-489e-96fe-aab0c804d5b4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 04 05:33:15 crc kubenswrapper[4802]: E1004 05:33:15.571121 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerName="extract-content" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571130 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerName="extract-content" Oct 04 05:33:15 crc kubenswrapper[4802]: E1004 05:33:15.571160 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerName="extract-utilities" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571168 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerName="extract-utilities" Oct 04 05:33:15 crc kubenswrapper[4802]: E1004 05:33:15.571180 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7461c63a-7221-4f60-b37c-bea93131da05" containerName="extract-utilities" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571188 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7461c63a-7221-4f60-b37c-bea93131da05" containerName="extract-utilities" Oct 04 05:33:15 crc kubenswrapper[4802]: E1004 
05:33:15.571200 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerName="registry-server" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571207 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerName="registry-server" Oct 04 05:33:15 crc kubenswrapper[4802]: E1004 05:33:15.571230 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7461c63a-7221-4f60-b37c-bea93131da05" containerName="extract-content" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571238 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7461c63a-7221-4f60-b37c-bea93131da05" containerName="extract-content" Oct 04 05:33:15 crc kubenswrapper[4802]: E1004 05:33:15.571253 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7461c63a-7221-4f60-b37c-bea93131da05" containerName="registry-server" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571261 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7461c63a-7221-4f60-b37c-bea93131da05" containerName="registry-server" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571474 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c28ae5-e04b-489e-96fe-aab0c804d5b4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571505 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3e7f92-a81f-46c2-aadf-7766cb10e2e3" containerName="registry-server" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.571518 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="7461c63a-7221-4f60-b37c-bea93131da05" containerName="registry-server" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.572198 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.596548 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.596556 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.596809 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.596954 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.596994 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.597519 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.647860 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p"] Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.673114 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.673240 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.673303 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlncp\" (UniqueName: \"kubernetes.io/projected/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-kube-api-access-jlncp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.673389 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.673459 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.673483 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.774535 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.774872 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlncp\" (UniqueName: \"kubernetes.io/projected/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-kube-api-access-jlncp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.775007 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.775131 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.775223 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.775334 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.779867 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.779932 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.780515 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.786562 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.787174 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.792409 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlncp\" (UniqueName: \"kubernetes.io/projected/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-kube-api-access-jlncp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42d2p\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:15 crc kubenswrapper[4802]: I1004 05:33:15.944683 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:33:16 crc kubenswrapper[4802]: I1004 05:33:16.257671 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p"] Oct 04 05:33:16 crc kubenswrapper[4802]: I1004 05:33:16.389941 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" event={"ID":"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde","Type":"ContainerStarted","Data":"51d78f848e5f8ac81621c7e51e097063818deb4243c5e695931445194da1ab2a"} Oct 04 05:33:17 crc kubenswrapper[4802]: I1004 05:33:17.399362 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" event={"ID":"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde","Type":"ContainerStarted","Data":"31c2a8b343fcea66b36f28c619d6d87f182f8d8dab730f34cb91db1c8489c7f5"} Oct 04 05:33:17 crc kubenswrapper[4802]: I1004 05:33:17.430011 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" podStartSLOduration=1.699192684 podStartE2EDuration="2.429985675s" podCreationTimestamp="2025-10-04 05:33:15 +0000 UTC" firstStartedPulling="2025-10-04 05:33:16.264943839 +0000 UTC m=+2838.672944464" lastFinishedPulling="2025-10-04 05:33:16.99573683 +0000 UTC m=+2839.403737455" observedRunningTime="2025-10-04 05:33:17.422071616 +0000 UTC m=+2839.830072251" watchObservedRunningTime="2025-10-04 05:33:17.429985675 +0000 UTC m=+2839.837986330" Oct 04 05:33:52 crc kubenswrapper[4802]: I1004 05:33:52.662411 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:33:52 crc kubenswrapper[4802]: I1004 
05:33:52.663136 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:34:17 crc kubenswrapper[4802]: I1004 05:34:17.992943 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5gfbx"] Oct 04 05:34:17 crc kubenswrapper[4802]: I1004 05:34:17.997676 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.024524 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5gfbx"] Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.180253 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-catalog-content\") pod \"certified-operators-5gfbx\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.180578 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-utilities\") pod \"certified-operators-5gfbx\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.180742 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnxrl\" (UniqueName: \"kubernetes.io/projected/22547876-a42a-47ed-a637-0aef4b42f4b4-kube-api-access-nnxrl\") pod 
\"certified-operators-5gfbx\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.282888 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-catalog-content\") pod \"certified-operators-5gfbx\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.283006 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-utilities\") pod \"certified-operators-5gfbx\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.283143 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnxrl\" (UniqueName: \"kubernetes.io/projected/22547876-a42a-47ed-a637-0aef4b42f4b4-kube-api-access-nnxrl\") pod \"certified-operators-5gfbx\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.283350 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-catalog-content\") pod \"certified-operators-5gfbx\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.283573 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-utilities\") pod \"certified-operators-5gfbx\" (UID: 
\"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.305070 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnxrl\" (UniqueName: \"kubernetes.io/projected/22547876-a42a-47ed-a637-0aef4b42f4b4-kube-api-access-nnxrl\") pod \"certified-operators-5gfbx\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.324542 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.811512 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5gfbx"] Oct 04 05:34:18 crc kubenswrapper[4802]: W1004 05:34:18.820768 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22547876_a42a_47ed_a637_0aef4b42f4b4.slice/crio-df05a89cfcbd9e65b12fa2ceb09e86aa7f5f236baa86045854d69149d93681b7 WatchSource:0}: Error finding container df05a89cfcbd9e65b12fa2ceb09e86aa7f5f236baa86045854d69149d93681b7: Status 404 returned error can't find the container with id df05a89cfcbd9e65b12fa2ceb09e86aa7f5f236baa86045854d69149d93681b7 Oct 04 05:34:18 crc kubenswrapper[4802]: I1004 05:34:18.904135 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gfbx" event={"ID":"22547876-a42a-47ed-a637-0aef4b42f4b4","Type":"ContainerStarted","Data":"df05a89cfcbd9e65b12fa2ceb09e86aa7f5f236baa86045854d69149d93681b7"} Oct 04 05:34:19 crc kubenswrapper[4802]: I1004 05:34:19.915161 4802 generic.go:334] "Generic (PLEG): container finished" podID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerID="fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1" exitCode=0 Oct 04 05:34:19 
crc kubenswrapper[4802]: I1004 05:34:19.915356 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gfbx" event={"ID":"22547876-a42a-47ed-a637-0aef4b42f4b4","Type":"ContainerDied","Data":"fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1"} Oct 04 05:34:21 crc kubenswrapper[4802]: I1004 05:34:21.934135 4802 generic.go:334] "Generic (PLEG): container finished" podID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerID="5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6" exitCode=0 Oct 04 05:34:21 crc kubenswrapper[4802]: I1004 05:34:21.934247 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gfbx" event={"ID":"22547876-a42a-47ed-a637-0aef4b42f4b4","Type":"ContainerDied","Data":"5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6"} Oct 04 05:34:22 crc kubenswrapper[4802]: I1004 05:34:22.662387 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:34:22 crc kubenswrapper[4802]: I1004 05:34:22.662783 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:34:22 crc kubenswrapper[4802]: I1004 05:34:22.942894 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gfbx" event={"ID":"22547876-a42a-47ed-a637-0aef4b42f4b4","Type":"ContainerStarted","Data":"a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12"} Oct 04 05:34:22 crc kubenswrapper[4802]: I1004 
05:34:22.963742 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5gfbx" podStartSLOduration=3.335886253 podStartE2EDuration="5.963719307s" podCreationTimestamp="2025-10-04 05:34:17 +0000 UTC" firstStartedPulling="2025-10-04 05:34:19.917503154 +0000 UTC m=+2902.325503779" lastFinishedPulling="2025-10-04 05:34:22.545336208 +0000 UTC m=+2904.953336833" observedRunningTime="2025-10-04 05:34:22.958233278 +0000 UTC m=+2905.366233913" watchObservedRunningTime="2025-10-04 05:34:22.963719307 +0000 UTC m=+2905.371719932" Oct 04 05:34:28 crc kubenswrapper[4802]: I1004 05:34:28.325625 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:28 crc kubenswrapper[4802]: I1004 05:34:28.326344 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:28 crc kubenswrapper[4802]: I1004 05:34:28.370767 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:29 crc kubenswrapper[4802]: I1004 05:34:29.041296 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:29 crc kubenswrapper[4802]: I1004 05:34:29.091411 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5gfbx"] Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.010080 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5gfbx" podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerName="registry-server" containerID="cri-o://a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12" gracePeriod=2 Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.470894 4802 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.645705 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-utilities\") pod \"22547876-a42a-47ed-a637-0aef4b42f4b4\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.645840 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnxrl\" (UniqueName: \"kubernetes.io/projected/22547876-a42a-47ed-a637-0aef4b42f4b4-kube-api-access-nnxrl\") pod \"22547876-a42a-47ed-a637-0aef4b42f4b4\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.645876 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-catalog-content\") pod \"22547876-a42a-47ed-a637-0aef4b42f4b4\" (UID: \"22547876-a42a-47ed-a637-0aef4b42f4b4\") " Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.646549 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-utilities" (OuterVolumeSpecName: "utilities") pod "22547876-a42a-47ed-a637-0aef4b42f4b4" (UID: "22547876-a42a-47ed-a637-0aef4b42f4b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.664834 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22547876-a42a-47ed-a637-0aef4b42f4b4-kube-api-access-nnxrl" (OuterVolumeSpecName: "kube-api-access-nnxrl") pod "22547876-a42a-47ed-a637-0aef4b42f4b4" (UID: "22547876-a42a-47ed-a637-0aef4b42f4b4"). InnerVolumeSpecName "kube-api-access-nnxrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.715301 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22547876-a42a-47ed-a637-0aef4b42f4b4" (UID: "22547876-a42a-47ed-a637-0aef4b42f4b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.747822 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.747858 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22547876-a42a-47ed-a637-0aef4b42f4b4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:34:31 crc kubenswrapper[4802]: I1004 05:34:31.747871 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnxrl\" (UniqueName: \"kubernetes.io/projected/22547876-a42a-47ed-a637-0aef4b42f4b4-kube-api-access-nnxrl\") on node \"crc\" DevicePath \"\"" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.022154 4802 generic.go:334] "Generic (PLEG): container finished" podID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerID="a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12" exitCode=0 Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.022210 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gfbx" event={"ID":"22547876-a42a-47ed-a637-0aef4b42f4b4","Type":"ContainerDied","Data":"a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12"} Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.022235 4802 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-5gfbx" event={"ID":"22547876-a42a-47ed-a637-0aef4b42f4b4","Type":"ContainerDied","Data":"df05a89cfcbd9e65b12fa2ceb09e86aa7f5f236baa86045854d69149d93681b7"} Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.022265 4802 scope.go:117] "RemoveContainer" containerID="a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.022415 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gfbx" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.045848 4802 scope.go:117] "RemoveContainer" containerID="5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.055885 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5gfbx"] Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.061918 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5gfbx"] Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.086372 4802 scope.go:117] "RemoveContainer" containerID="fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.107828 4802 scope.go:117] "RemoveContainer" containerID="a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12" Oct 04 05:34:32 crc kubenswrapper[4802]: E1004 05:34:32.108291 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12\": container with ID starting with a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12 not found: ID does not exist" containerID="a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 
05:34:32.108365 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12"} err="failed to get container status \"a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12\": rpc error: code = NotFound desc = could not find container \"a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12\": container with ID starting with a0938c928f7447b24269ff0356f75bcf1850461f3caf2b847ec76111fabd8b12 not found: ID does not exist" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.108418 4802 scope.go:117] "RemoveContainer" containerID="5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6" Oct 04 05:34:32 crc kubenswrapper[4802]: E1004 05:34:32.108973 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6\": container with ID starting with 5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6 not found: ID does not exist" containerID="5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.109034 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6"} err="failed to get container status \"5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6\": rpc error: code = NotFound desc = could not find container \"5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6\": container with ID starting with 5ac779d2a6bb9821864d0b95d91bfa03fdf40cada34c1567a8a943907c1d45c6 not found: ID does not exist" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.109051 4802 scope.go:117] "RemoveContainer" containerID="fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1" Oct 04 05:34:32 crc 
kubenswrapper[4802]: E1004 05:34:32.110270 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1\": container with ID starting with fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1 not found: ID does not exist" containerID="fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.110317 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1"} err="failed to get container status \"fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1\": rpc error: code = NotFound desc = could not find container \"fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1\": container with ID starting with fb7ca1eefdc48ce40defcf8e1882427fced095ad473f74817eb40f4020900ea1 not found: ID does not exist" Oct 04 05:34:32 crc kubenswrapper[4802]: I1004 05:34:32.370666 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" path="/var/lib/kubelet/pods/22547876-a42a-47ed-a637-0aef4b42f4b4/volumes" Oct 04 05:34:52 crc kubenswrapper[4802]: I1004 05:34:52.662278 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:34:52 crc kubenswrapper[4802]: I1004 05:34:52.662951 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Oct 04 05:34:52 crc kubenswrapper[4802]: I1004 05:34:52.663059 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:34:52 crc kubenswrapper[4802]: I1004 05:34:52.663883 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c402b9be756f7c07beeb9015d7a8cb4d938d61c476b98861866855d2ce212e4e"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:34:52 crc kubenswrapper[4802]: I1004 05:34:52.663949 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://c402b9be756f7c07beeb9015d7a8cb4d938d61c476b98861866855d2ce212e4e" gracePeriod=600 Oct 04 05:34:53 crc kubenswrapper[4802]: I1004 05:34:53.210805 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="c402b9be756f7c07beeb9015d7a8cb4d938d61c476b98861866855d2ce212e4e" exitCode=0 Oct 04 05:34:53 crc kubenswrapper[4802]: I1004 05:34:53.210905 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"c402b9be756f7c07beeb9015d7a8cb4d938d61c476b98861866855d2ce212e4e"} Oct 04 05:34:53 crc kubenswrapper[4802]: I1004 05:34:53.211138 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" 
event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76"} Oct 04 05:34:53 crc kubenswrapper[4802]: I1004 05:34:53.211158 4802 scope.go:117] "RemoveContainer" containerID="34f3c24a72e87aed6a1a1e67fa617b9e46e717f253b8e575024be852db57d96b" Oct 04 05:36:52 crc kubenswrapper[4802]: I1004 05:36:52.663181 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:36:52 crc kubenswrapper[4802]: I1004 05:36:52.663750 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:37:22 crc kubenswrapper[4802]: I1004 05:37:22.662838 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:37:22 crc kubenswrapper[4802]: I1004 05:37:22.663451 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:37:52 crc kubenswrapper[4802]: I1004 05:37:52.662759 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:37:52 crc kubenswrapper[4802]: I1004 05:37:52.663371 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:37:52 crc kubenswrapper[4802]: I1004 05:37:52.663429 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:37:52 crc kubenswrapper[4802]: I1004 05:37:52.664235 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:37:52 crc kubenswrapper[4802]: I1004 05:37:52.664289 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" gracePeriod=600 Oct 04 05:37:52 crc kubenswrapper[4802]: E1004 05:37:52.787205 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:37:53 crc kubenswrapper[4802]: I1004 05:37:53.679549 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" exitCode=0 Oct 04 05:37:53 crc kubenswrapper[4802]: I1004 05:37:53.679633 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76"} Oct 04 05:37:53 crc kubenswrapper[4802]: I1004 05:37:53.679728 4802 scope.go:117] "RemoveContainer" containerID="c402b9be756f7c07beeb9015d7a8cb4d938d61c476b98861866855d2ce212e4e" Oct 04 05:37:53 crc kubenswrapper[4802]: I1004 05:37:53.682605 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:37:53 crc kubenswrapper[4802]: E1004 05:37:53.683514 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:38:05 crc kubenswrapper[4802]: I1004 05:38:05.778128 4802 generic.go:334] "Generic (PLEG): container finished" podID="365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" containerID="31c2a8b343fcea66b36f28c619d6d87f182f8d8dab730f34cb91db1c8489c7f5" exitCode=0 Oct 04 05:38:05 crc kubenswrapper[4802]: I1004 05:38:05.778215 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" 
event={"ID":"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde","Type":"ContainerDied","Data":"31c2a8b343fcea66b36f28c619d6d87f182f8d8dab730f34cb91db1c8489c7f5"} Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.152998 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.263778 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-combined-ca-bundle\") pod \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.264147 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ssh-key\") pod \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.264249 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-inventory\") pod \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.264297 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ceph\") pod \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.264338 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlncp\" (UniqueName: 
\"kubernetes.io/projected/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-kube-api-access-jlncp\") pod \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.264410 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-secret-0\") pod \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\" (UID: \"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde\") " Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.276042 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ceph" (OuterVolumeSpecName: "ceph") pod "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" (UID: "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.276096 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-kube-api-access-jlncp" (OuterVolumeSpecName: "kube-api-access-jlncp") pod "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" (UID: "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde"). InnerVolumeSpecName "kube-api-access-jlncp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.276115 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" (UID: "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.296029 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" (UID: "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.296458 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" (UID: "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.296631 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-inventory" (OuterVolumeSpecName: "inventory") pod "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" (UID: "365ae152-a4d6-4ecd-b8c6-ea3d110ebcde"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.366115 4802 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.366149 4802 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.366160 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.366169 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.366179 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.366188 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlncp\" (UniqueName: \"kubernetes.io/projected/365ae152-a4d6-4ecd-b8c6-ea3d110ebcde-kube-api-access-jlncp\") on node \"crc\" DevicePath \"\"" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.797167 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" event={"ID":"365ae152-a4d6-4ecd-b8c6-ea3d110ebcde","Type":"ContainerDied","Data":"51d78f848e5f8ac81621c7e51e097063818deb4243c5e695931445194da1ab2a"} Oct 04 05:38:07 crc 
kubenswrapper[4802]: I1004 05:38:07.797200 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51d78f848e5f8ac81621c7e51e097063818deb4243c5e695931445194da1ab2a" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.797252 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42d2p" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.894396 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm"] Oct 04 05:38:07 crc kubenswrapper[4802]: E1004 05:38:07.894828 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerName="extract-content" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.894849 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerName="extract-content" Oct 04 05:38:07 crc kubenswrapper[4802]: E1004 05:38:07.894870 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.894879 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 04 05:38:07 crc kubenswrapper[4802]: E1004 05:38:07.894896 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerName="registry-server" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.894903 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerName="registry-server" Oct 04 05:38:07 crc kubenswrapper[4802]: E1004 05:38:07.894945 4802 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerName="extract-utilities" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.894953 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerName="extract-utilities" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.895170 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="365ae152-a4d6-4ecd-b8c6-ea3d110ebcde" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.895204 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="22547876-a42a-47ed-a637-0aef4b42f4b4" containerName="registry-server" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.896127 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.899735 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.899854 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.899935 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.900109 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jll6" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.900126 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.900365 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 
05:38:07.900456 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.900691 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.902175 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 04 05:38:07 crc kubenswrapper[4802]: I1004 05:38:07.911856 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm"] Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078186 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078242 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078313 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: 
\"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078357 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078380 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078427 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078449 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078480 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078512 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078541 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dj9n\" (UniqueName: \"kubernetes.io/projected/5c113e22-c317-4882-9403-6bdc543e9775-kube-api-access-8dj9n\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.078573 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc 
kubenswrapper[4802]: I1004 05:38:08.180251 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180311 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dj9n\" (UniqueName: \"kubernetes.io/projected/5c113e22-c317-4882-9403-6bdc543e9775-kube-api-access-8dj9n\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180355 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180419 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180448 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180514 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180575 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180604 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180677 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: 
\"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180702 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.180745 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.181767 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.182387 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.185147 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.185259 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.185411 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.186127 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.186323 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.186605 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.188940 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.194228 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.200050 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dj9n\" (UniqueName: \"kubernetes.io/projected/5c113e22-c317-4882-9403-6bdc543e9775-kube-api-access-8dj9n\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.215622 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.366442 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:38:08 crc kubenswrapper[4802]: E1004 05:38:08.366843 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.764466 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm"] Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.764487 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 05:38:08 crc kubenswrapper[4802]: I1004 05:38:08.808883 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" event={"ID":"5c113e22-c317-4882-9403-6bdc543e9775","Type":"ContainerStarted","Data":"9631cb5b77476e2ab15ab148f7d6dc49c1be7651eceb0316395e6c01f2d13989"} Oct 04 05:38:09 crc kubenswrapper[4802]: I1004 05:38:09.820383 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" 
event={"ID":"5c113e22-c317-4882-9403-6bdc543e9775","Type":"ContainerStarted","Data":"e5c5ed65630e0bc4d1659eec5d60ef12d2e0309c838055ac32d5fb300d804079"} Oct 04 05:38:09 crc kubenswrapper[4802]: I1004 05:38:09.838916 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" podStartSLOduration=2.141368188 podStartE2EDuration="2.838892756s" podCreationTimestamp="2025-10-04 05:38:07 +0000 UTC" firstStartedPulling="2025-10-04 05:38:08.764203667 +0000 UTC m=+3131.172204302" lastFinishedPulling="2025-10-04 05:38:09.461728225 +0000 UTC m=+3131.869728870" observedRunningTime="2025-10-04 05:38:09.836278081 +0000 UTC m=+3132.244278706" watchObservedRunningTime="2025-10-04 05:38:09.838892756 +0000 UTC m=+3132.246893381" Oct 04 05:38:20 crc kubenswrapper[4802]: I1004 05:38:20.360674 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:38:20 crc kubenswrapper[4802]: E1004 05:38:20.361584 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:38:31 crc kubenswrapper[4802]: I1004 05:38:31.360368 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:38:31 crc kubenswrapper[4802]: E1004 05:38:31.361250 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:38:42 crc kubenswrapper[4802]: I1004 05:38:42.361206 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:38:42 crc kubenswrapper[4802]: E1004 05:38:42.362671 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:38:57 crc kubenswrapper[4802]: I1004 05:38:57.359582 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:38:57 crc kubenswrapper[4802]: E1004 05:38:57.360420 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:39:12 crc kubenswrapper[4802]: I1004 05:39:12.360970 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:39:12 crc kubenswrapper[4802]: E1004 05:39:12.361841 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:39:26 crc kubenswrapper[4802]: I1004 05:39:26.359875 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:39:26 crc kubenswrapper[4802]: E1004 05:39:26.360756 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:39:40 crc kubenswrapper[4802]: I1004 05:39:40.359883 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:39:40 crc kubenswrapper[4802]: E1004 05:39:40.360652 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:39:53 crc kubenswrapper[4802]: I1004 05:39:53.360004 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:39:53 crc kubenswrapper[4802]: E1004 05:39:53.360703 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:40:07 crc kubenswrapper[4802]: I1004 05:40:07.359937 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:40:07 crc kubenswrapper[4802]: E1004 05:40:07.360697 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:40:20 crc kubenswrapper[4802]: I1004 05:40:20.360307 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:40:20 crc kubenswrapper[4802]: E1004 05:40:20.361021 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:40:32 crc kubenswrapper[4802]: I1004 05:40:32.360945 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:40:32 crc kubenswrapper[4802]: E1004 05:40:32.362431 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:40:44 crc kubenswrapper[4802]: I1004 05:40:44.359792 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:40:44 crc kubenswrapper[4802]: E1004 05:40:44.360662 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:40:58 crc kubenswrapper[4802]: I1004 05:40:58.359965 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:40:58 crc kubenswrapper[4802]: E1004 05:40:58.361044 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:41:06 crc kubenswrapper[4802]: I1004 05:41:06.932259 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pvz5c"] Oct 04 05:41:06 crc kubenswrapper[4802]: I1004 05:41:06.935154 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:06 crc kubenswrapper[4802]: I1004 05:41:06.947198 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-utilities\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:06 crc kubenswrapper[4802]: I1004 05:41:06.947388 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-catalog-content\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:06 crc kubenswrapper[4802]: I1004 05:41:06.947466 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxqp\" (UniqueName: \"kubernetes.io/projected/69770e0b-ba06-45c1-a8a0-e314940b21cf-kube-api-access-vmxqp\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:06 crc kubenswrapper[4802]: I1004 05:41:06.964360 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvz5c"] Oct 04 05:41:07 crc kubenswrapper[4802]: I1004 05:41:07.048992 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-catalog-content\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:07 crc kubenswrapper[4802]: I1004 05:41:07.049062 4802 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vmxqp\" (UniqueName: \"kubernetes.io/projected/69770e0b-ba06-45c1-a8a0-e314940b21cf-kube-api-access-vmxqp\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:07 crc kubenswrapper[4802]: I1004 05:41:07.049550 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-utilities\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:07 crc kubenswrapper[4802]: I1004 05:41:07.050013 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-catalog-content\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:07 crc kubenswrapper[4802]: I1004 05:41:07.050143 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-utilities\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:07 crc kubenswrapper[4802]: I1004 05:41:07.079117 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxqp\" (UniqueName: \"kubernetes.io/projected/69770e0b-ba06-45c1-a8a0-e314940b21cf-kube-api-access-vmxqp\") pod \"redhat-operators-pvz5c\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:07 crc kubenswrapper[4802]: I1004 05:41:07.260260 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:07 crc kubenswrapper[4802]: I1004 05:41:07.689370 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvz5c"] Oct 04 05:41:08 crc kubenswrapper[4802]: I1004 05:41:08.333237 4802 generic.go:334] "Generic (PLEG): container finished" podID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerID="7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f" exitCode=0 Oct 04 05:41:08 crc kubenswrapper[4802]: I1004 05:41:08.333293 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvz5c" event={"ID":"69770e0b-ba06-45c1-a8a0-e314940b21cf","Type":"ContainerDied","Data":"7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f"} Oct 04 05:41:08 crc kubenswrapper[4802]: I1004 05:41:08.333326 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvz5c" event={"ID":"69770e0b-ba06-45c1-a8a0-e314940b21cf","Type":"ContainerStarted","Data":"ef318cab5db6584a803736d64ed502cbdf0d0a3f1c4ea27e630f773856eaee4d"} Oct 04 05:41:10 crc kubenswrapper[4802]: I1004 05:41:10.353464 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvz5c" event={"ID":"69770e0b-ba06-45c1-a8a0-e314940b21cf","Type":"ContainerStarted","Data":"d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120"} Oct 04 05:41:11 crc kubenswrapper[4802]: I1004 05:41:11.359937 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:41:11 crc kubenswrapper[4802]: E1004 05:41:11.361416 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:41:11 crc kubenswrapper[4802]: I1004 05:41:11.367533 4802 generic.go:334] "Generic (PLEG): container finished" podID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerID="d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120" exitCode=0 Oct 04 05:41:11 crc kubenswrapper[4802]: I1004 05:41:11.367581 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvz5c" event={"ID":"69770e0b-ba06-45c1-a8a0-e314940b21cf","Type":"ContainerDied","Data":"d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120"} Oct 04 05:41:12 crc kubenswrapper[4802]: I1004 05:41:12.378878 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvz5c" event={"ID":"69770e0b-ba06-45c1-a8a0-e314940b21cf","Type":"ContainerStarted","Data":"d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de"} Oct 04 05:41:12 crc kubenswrapper[4802]: I1004 05:41:12.401472 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pvz5c" podStartSLOduration=2.757213696 podStartE2EDuration="6.401451012s" podCreationTimestamp="2025-10-04 05:41:06 +0000 UTC" firstStartedPulling="2025-10-04 05:41:08.337573835 +0000 UTC m=+3310.745574460" lastFinishedPulling="2025-10-04 05:41:11.981811111 +0000 UTC m=+3314.389811776" observedRunningTime="2025-10-04 05:41:12.394297712 +0000 UTC m=+3314.802298357" watchObservedRunningTime="2025-10-04 05:41:12.401451012 +0000 UTC m=+3314.809451647" Oct 04 05:41:17 crc kubenswrapper[4802]: I1004 05:41:17.261062 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:17 crc kubenswrapper[4802]: I1004 
05:41:17.261614 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:17 crc kubenswrapper[4802]: I1004 05:41:17.312581 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:17 crc kubenswrapper[4802]: I1004 05:41:17.458426 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:17 crc kubenswrapper[4802]: I1004 05:41:17.551090 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvz5c"] Oct 04 05:41:19 crc kubenswrapper[4802]: I1004 05:41:19.429008 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pvz5c" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerName="registry-server" containerID="cri-o://d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de" gracePeriod=2 Oct 04 05:41:19 crc kubenswrapper[4802]: I1004 05:41:19.858456 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.016588 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-utilities\") pod \"69770e0b-ba06-45c1-a8a0-e314940b21cf\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.016693 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmxqp\" (UniqueName: \"kubernetes.io/projected/69770e0b-ba06-45c1-a8a0-e314940b21cf-kube-api-access-vmxqp\") pod \"69770e0b-ba06-45c1-a8a0-e314940b21cf\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.016745 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-catalog-content\") pod \"69770e0b-ba06-45c1-a8a0-e314940b21cf\" (UID: \"69770e0b-ba06-45c1-a8a0-e314940b21cf\") " Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.018343 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-utilities" (OuterVolumeSpecName: "utilities") pod "69770e0b-ba06-45c1-a8a0-e314940b21cf" (UID: "69770e0b-ba06-45c1-a8a0-e314940b21cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.022084 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69770e0b-ba06-45c1-a8a0-e314940b21cf-kube-api-access-vmxqp" (OuterVolumeSpecName: "kube-api-access-vmxqp") pod "69770e0b-ba06-45c1-a8a0-e314940b21cf" (UID: "69770e0b-ba06-45c1-a8a0-e314940b21cf"). InnerVolumeSpecName "kube-api-access-vmxqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.119081 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.119126 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmxqp\" (UniqueName: \"kubernetes.io/projected/69770e0b-ba06-45c1-a8a0-e314940b21cf-kube-api-access-vmxqp\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.134705 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69770e0b-ba06-45c1-a8a0-e314940b21cf" (UID: "69770e0b-ba06-45c1-a8a0-e314940b21cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.220856 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69770e0b-ba06-45c1-a8a0-e314940b21cf-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.438414 4802 generic.go:334] "Generic (PLEG): container finished" podID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerID="d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de" exitCode=0 Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.438461 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvz5c" event={"ID":"69770e0b-ba06-45c1-a8a0-e314940b21cf","Type":"ContainerDied","Data":"d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de"} Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.438492 4802 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-pvz5c" event={"ID":"69770e0b-ba06-45c1-a8a0-e314940b21cf","Type":"ContainerDied","Data":"ef318cab5db6584a803736d64ed502cbdf0d0a3f1c4ea27e630f773856eaee4d"} Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.438510 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvz5c" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.438561 4802 scope.go:117] "RemoveContainer" containerID="d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.469206 4802 scope.go:117] "RemoveContainer" containerID="d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.473085 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvz5c"] Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.480079 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pvz5c"] Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.488252 4802 scope.go:117] "RemoveContainer" containerID="7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.550364 4802 scope.go:117] "RemoveContainer" containerID="d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de" Oct 04 05:41:20 crc kubenswrapper[4802]: E1004 05:41:20.551103 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de\": container with ID starting with d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de not found: ID does not exist" containerID="d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.551144 4802 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de"} err="failed to get container status \"d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de\": rpc error: code = NotFound desc = could not find container \"d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de\": container with ID starting with d29483e2f7ea1826a0aea018f4c77f53fe3ede65a9790a8371e3f4175c6419de not found: ID does not exist" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.551171 4802 scope.go:117] "RemoveContainer" containerID="d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120" Oct 04 05:41:20 crc kubenswrapper[4802]: E1004 05:41:20.551460 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120\": container with ID starting with d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120 not found: ID does not exist" containerID="d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.551496 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120"} err="failed to get container status \"d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120\": rpc error: code = NotFound desc = could not find container \"d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120\": container with ID starting with d8222c62985745c6d2811b54de1896ffd6c9fa17654af02cc68bc4bde912f120 not found: ID does not exist" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.551521 4802 scope.go:117] "RemoveContainer" containerID="7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f" Oct 04 05:41:20 crc kubenswrapper[4802]: E1004 
05:41:20.551811 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f\": container with ID starting with 7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f not found: ID does not exist" containerID="7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f" Oct 04 05:41:20 crc kubenswrapper[4802]: I1004 05:41:20.551833 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f"} err="failed to get container status \"7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f\": rpc error: code = NotFound desc = could not find container \"7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f\": container with ID starting with 7258db4aeb5934cbb6a6c2db16079d7d5422c38af3d3de3465d749c5076cdb8f not found: ID does not exist" Oct 04 05:41:22 crc kubenswrapper[4802]: I1004 05:41:22.371742 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" path="/var/lib/kubelet/pods/69770e0b-ba06-45c1-a8a0-e314940b21cf/volumes" Oct 04 05:41:24 crc kubenswrapper[4802]: I1004 05:41:24.361017 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:41:24 crc kubenswrapper[4802]: E1004 05:41:24.361525 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:41:36 crc kubenswrapper[4802]: I1004 05:41:36.360383 
4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:41:36 crc kubenswrapper[4802]: E1004 05:41:36.361212 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:41:38 crc kubenswrapper[4802]: I1004 05:41:38.609877 4802 generic.go:334] "Generic (PLEG): container finished" podID="5c113e22-c317-4882-9403-6bdc543e9775" containerID="e5c5ed65630e0bc4d1659eec5d60ef12d2e0309c838055ac32d5fb300d804079" exitCode=0 Oct 04 05:41:38 crc kubenswrapper[4802]: I1004 05:41:38.610022 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" event={"ID":"5c113e22-c317-4882-9403-6bdc543e9775","Type":"ContainerDied","Data":"e5c5ed65630e0bc4d1659eec5d60ef12d2e0309c838055ac32d5fb300d804079"} Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.004389 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.050916 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-1\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.050994 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-0\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.051014 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-1\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.051105 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-custom-ceph-combined-ca-bundle\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.051191 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-inventory\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 
05:41:40.051210 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-0\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.051226 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dj9n\" (UniqueName: \"kubernetes.io/projected/5c113e22-c317-4882-9403-6bdc543e9775-kube-api-access-8dj9n\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.051279 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-nova-extra-config-0\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.051301 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ceph\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.051325 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ssh-key\") pod \"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.051340 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-ceph-nova-0\") pod 
\"5c113e22-c317-4882-9403-6bdc543e9775\" (UID: \"5c113e22-c317-4882-9403-6bdc543e9775\") " Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.056430 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c113e22-c317-4882-9403-6bdc543e9775-kube-api-access-8dj9n" (OuterVolumeSpecName: "kube-api-access-8dj9n") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "kube-api-access-8dj9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.056494 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ceph" (OuterVolumeSpecName: "ceph") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.057115 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.078461 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.079914 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.083235 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.088518 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.094464 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.097811 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-inventory" (OuterVolumeSpecName: "inventory") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.100120 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.100408 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5c113e22-c317-4882-9403-6bdc543e9775" (UID: "5c113e22-c317-4882-9403-6bdc543e9775"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153063 4802 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153097 4802 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-inventory\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153109 4802 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153118 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dj9n\" (UniqueName: \"kubernetes.io/projected/5c113e22-c317-4882-9403-6bdc543e9775-kube-api-access-8dj9n\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153127 4802 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153135 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153144 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc 
kubenswrapper[4802]: I1004 05:41:40.153152 4802 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5c113e22-c317-4882-9403-6bdc543e9775-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153160 4802 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153170 4802 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.153178 4802 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5c113e22-c317-4882-9403-6bdc543e9775-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.636112 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" event={"ID":"5c113e22-c317-4882-9403-6bdc543e9775","Type":"ContainerDied","Data":"9631cb5b77476e2ab15ab148f7d6dc49c1be7651eceb0316395e6c01f2d13989"} Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.636430 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9631cb5b77476e2ab15ab148f7d6dc49c1be7651eceb0316395e6c01f2d13989" Oct 04 05:41:40 crc kubenswrapper[4802]: I1004 05:41:40.636288 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm" Oct 04 05:41:49 crc kubenswrapper[4802]: I1004 05:41:49.359480 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:41:49 crc kubenswrapper[4802]: E1004 05:41:49.361627 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.525240 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 04 05:41:54 crc kubenswrapper[4802]: E1004 05:41:54.526072 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c113e22-c317-4882-9403-6bdc543e9775" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.526086 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c113e22-c317-4882-9403-6bdc543e9775" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 04 05:41:54 crc kubenswrapper[4802]: E1004 05:41:54.526105 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerName="extract-utilities" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.526111 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerName="extract-utilities" Oct 04 05:41:54 crc kubenswrapper[4802]: E1004 05:41:54.526122 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" 
containerName="extract-content" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.526128 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerName="extract-content" Oct 04 05:41:54 crc kubenswrapper[4802]: E1004 05:41:54.526151 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerName="registry-server" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.526169 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerName="registry-server" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.526335 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="69770e0b-ba06-45c1-a8a0-e314940b21cf" containerName="registry-server" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.526351 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c113e22-c317-4882-9403-6bdc543e9775" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.527318 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.529715 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.529876 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.544831 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.556230 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.557718 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.564525 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.582544 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.629971 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630020 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " 
pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630098 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-scripts\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630190 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-lib-modules\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630215 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630280 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630361 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630437 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b5d6eca-76be-4473-8c62-b92cd50ba646-ceph\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630460 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630526 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d57zw\" (UniqueName: \"kubernetes.io/projected/7b5d6eca-76be-4473-8c62-b92cd50ba646-kube-api-access-d57zw\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630593 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630689 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630717 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630771 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630839 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630873 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630931 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf0554fa-2bf2-45d4-a620-7445764b693d-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.630996 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-run\") pod 
\"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631054 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631160 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631189 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631275 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631304 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-scripts\") pod \"cinder-volume-volume1-0\" (UID: 
\"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631379 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631489 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631560 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-sys\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631595 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.631917 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8gsh\" (UniqueName: \"kubernetes.io/projected/cf0554fa-2bf2-45d4-a620-7445764b693d-kube-api-access-v8gsh\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " 
pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.632010 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.632043 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-dev\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.632116 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.632172 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-config-data\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733563 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733626 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733660 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b5d6eca-76be-4473-8c62-b92cd50ba646-ceph\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733686 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733719 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d57zw\" (UniqueName: \"kubernetes.io/projected/7b5d6eca-76be-4473-8c62-b92cd50ba646-kube-api-access-d57zw\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733725 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733745 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-nvme\") pod \"cinder-backup-0\" 
(UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734151 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734187 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734205 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734252 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734284 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734307 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf0554fa-2bf2-45d4-a620-7445764b693d-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734342 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-run\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734361 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734377 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734401 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734445 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-locks-brick\") pod 
\"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734460 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734488 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734551 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734571 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-sys\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734599 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734662 
4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8gsh\" (UniqueName: \"kubernetes.io/projected/cf0554fa-2bf2-45d4-a620-7445764b693d-kube-api-access-v8gsh\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734699 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734734 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-dev\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734768 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734784 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-config-data\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734821 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734840 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734870 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-scripts\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734910 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-lib-modules\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.734933 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.735072 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " 
pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.735110 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-run\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.735158 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.737210 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.737273 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.737282 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-sys\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.737526 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.737798 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.737800 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733874 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.733916 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.737888 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc 
kubenswrapper[4802]: I1004 05:41:54.737914 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.737936 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.738086 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.739255 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.739329 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-dev\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.739384 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf0554fa-2bf2-45d4-a620-7445764b693d-etc-machine-id\") pod 
\"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.739418 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b5d6eca-76be-4473-8c62-b92cd50ba646-lib-modules\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.742353 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b5d6eca-76be-4473-8c62-b92cd50ba646-ceph\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.742399 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.742362 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.743803 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf0554fa-2bf2-45d4-a620-7445764b693d-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.745989 4802 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-scripts\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.746382 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.747237 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.748962 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-config-data\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.751301 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5d6eca-76be-4473-8c62-b92cd50ba646-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.757303 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf0554fa-2bf2-45d4-a620-7445764b693d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " 
pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.759417 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8gsh\" (UniqueName: \"kubernetes.io/projected/cf0554fa-2bf2-45d4-a620-7445764b693d-kube-api-access-v8gsh\") pod \"cinder-volume-volume1-0\" (UID: \"cf0554fa-2bf2-45d4-a620-7445764b693d\") " pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.761148 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d57zw\" (UniqueName: \"kubernetes.io/projected/7b5d6eca-76be-4473-8c62-b92cd50ba646-kube-api-access-d57zw\") pod \"cinder-backup-0\" (UID: \"7b5d6eca-76be-4473-8c62-b92cd50ba646\") " pod="openstack/cinder-backup-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.845389 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:54 crc kubenswrapper[4802]: I1004 05:41:54.874962 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.229354 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-cps8q"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.231712 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-cps8q" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.259388 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-cps8q"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.348696 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhkp8\" (UniqueName: \"kubernetes.io/projected/516ec0fa-3ee1-4110-82c3-2f6b480671e0-kube-api-access-zhkp8\") pod \"manila-db-create-cps8q\" (UID: \"516ec0fa-3ee1-4110-82c3-2f6b480671e0\") " pod="openstack/manila-db-create-cps8q" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.372942 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bcc786889-f5krn"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.374360 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.378138 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.378252 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-g9s6j" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.378269 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.378384 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.393054 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bcc786889-f5krn"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.400788 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 
05:41:55.403996 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.406460 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.406850 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.407071 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x45mr" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.407204 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.440026 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.442242 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.451225 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.452087 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-config-data\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.452306 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkp8\" (UniqueName: \"kubernetes.io/projected/516ec0fa-3ee1-4110-82c3-2f6b480671e0-kube-api-access-zhkp8\") pod \"manila-db-create-cps8q\" (UID: \"516ec0fa-3ee1-4110-82c3-2f6b480671e0\") " pod="openstack/manila-db-create-cps8q" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.452467 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnvb4\" (UniqueName: \"kubernetes.io/projected/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-kube-api-access-fnvb4\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.452504 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-horizon-secret-key\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.452531 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-scripts\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.452623 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-logs\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.463911 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.464103 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.478711 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:55 crc kubenswrapper[4802]: E1004 05:41:55.489796 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-l9c67 logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-l9c67 logs public-tls-certs scripts]: context canceled" pod="openstack/glance-default-external-api-0" podUID="48c7792e-77ae-4237-9296-075304dd36a5" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.500752 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhkp8\" (UniqueName: \"kubernetes.io/projected/516ec0fa-3ee1-4110-82c3-2f6b480671e0-kube-api-access-zhkp8\") pod \"manila-db-create-cps8q\" (UID: \"516ec0fa-3ee1-4110-82c3-2f6b480671e0\") " 
pod="openstack/manila-db-create-cps8q" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.518092 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6676d6749-xf6gp"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.520442 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.556412 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6676d6749-xf6gp"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.556986 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557043 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557087 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557133 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnvb4\" (UniqueName: 
\"kubernetes.io/projected/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-kube-api-access-fnvb4\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557166 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-horizon-secret-key\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557191 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-scripts\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557246 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557289 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557316 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6sf\" (UniqueName: 
\"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-kube-api-access-2d6sf\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557352 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-logs\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557374 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-logs\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557397 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557433 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-config-data\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.557470 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.558000 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-scripts\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.558027 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-logs\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.559312 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-config-data\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.563535 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-cps8q" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.569159 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-horizon-secret-key\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.577705 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.578336 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnvb4\" (UniqueName: \"kubernetes.io/projected/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-kube-api-access-fnvb4\") pod \"horizon-bcc786889-f5krn\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.653826 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.660810 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.660854 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.660878 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.660894 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-logs\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.660911 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-config-data\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.660933 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.660954 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-horizon-secret-key\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.660989 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661016 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661056 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-logs\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661078 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-scripts\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661112 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661136 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661153 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661172 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d6sf\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-kube-api-access-2d6sf\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661203 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-logs\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661223 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.661248 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.668226 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.668265 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.668511 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.668516 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.668851 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lshcg\" (UniqueName: 
\"kubernetes.io/projected/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-kube-api-access-lshcg\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.668905 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.668951 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9c67\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-kube-api-access-l9c67\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.670166 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.670366 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-logs\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.674023 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.678355 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.680310 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.681835 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.700548 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d6sf\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-kube-api-access-2d6sf\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.712190 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.717432 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.753623 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.776315 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.777321 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf0554fa-2bf2-45d4-a620-7445764b693d","Type":"ContainerStarted","Data":"37f8ae7334efd33bf171a92679180a3301edc49b5ffab9b8a44d33bf6fa17917"} Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789310 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lshcg\" (UniqueName: \"kubernetes.io/projected/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-kube-api-access-lshcg\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789378 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789413 4802 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-l9c67\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-kube-api-access-l9c67\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789578 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789621 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789669 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-logs\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789690 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-config-data\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789727 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-horizon-secret-key\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789774 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789850 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-logs\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789878 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-scripts\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.789991 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.790122 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.790221 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.790430 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.790945 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.791207 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-logs\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.791477 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-config-data\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 
05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.791908 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-logs\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.792958 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-scripts\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.796080 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.799931 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.800953 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.803804 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-horizon-secret-key\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.806505 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.810600 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lshcg\" (UniqueName: \"kubernetes.io/projected/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-kube-api-access-lshcg\") pod \"horizon-6676d6749-xf6gp\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") " pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.811627 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.829178 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9c67\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-kube-api-access-l9c67\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.844893 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:41:55 crc kubenswrapper[4802]: I1004 05:41:55.868338 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.003799 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:41:56 crc kubenswrapper[4802]: W1004 05:41:56.037912 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod516ec0fa_3ee1_4110_82c3_2f6b480671e0.slice/crio-9617d5ec30ccff6338cb30f9ffc46b36fbe6eea99c46a6f1a0f77b53298cd5c8 WatchSource:0}: Error finding container 9617d5ec30ccff6338cb30f9ffc46b36fbe6eea99c46a6f1a0f77b53298cd5c8: Status 404 returned error can't find the container with id 9617d5ec30ccff6338cb30f9ffc46b36fbe6eea99c46a6f1a0f77b53298cd5c8 Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.040215 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-cps8q"] Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.098052 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-logs\") pod \"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.098346 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-ceph\") pod \"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc 
kubenswrapper[4802]: I1004 05:41:56.098389 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-httpd-run\") pod \"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.098419 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-config-data\") pod \"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.098445 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9c67\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-kube-api-access-l9c67\") pod \"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.098476 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.098512 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-public-tls-certs\") pod \"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.098535 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-combined-ca-bundle\") pod 
\"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.098588 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-scripts\") pod \"48c7792e-77ae-4237-9296-075304dd36a5\" (UID: \"48c7792e-77ae-4237-9296-075304dd36a5\") " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.103165 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-scripts" (OuterVolumeSpecName: "scripts") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.103799 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.103807 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-config-data" (OuterVolumeSpecName: "config-data") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.104290 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-logs" (OuterVolumeSpecName: "logs") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.104333 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.105390 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-ceph" (OuterVolumeSpecName: "ceph") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.108498 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.110024 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-kube-api-access-l9c67" (OuterVolumeSpecName: "kube-api-access-l9c67") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "kube-api-access-l9c67". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.110328 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "48c7792e-77ae-4237-9296-075304dd36a5" (UID: "48c7792e-77ae-4237-9296-075304dd36a5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200442 4802 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200472 4802 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200483 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200493 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-scripts\") on node \"crc\" DevicePath 
\"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200502 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200512 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200520 4802 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/48c7792e-77ae-4237-9296-075304dd36a5-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200527 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48c7792e-77ae-4237-9296-075304dd36a5-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.200535 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9c67\" (UniqueName: \"kubernetes.io/projected/48c7792e-77ae-4237-9296-075304dd36a5-kube-api-access-l9c67\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.225684 4802 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.258716 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bcc786889-f5krn"] Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.301728 4802 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:56 crc kubenswrapper[4802]: W1004 05:41:56.391324 
4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eaeda23_f357_4c31_8cd4_3f41d6a33a70.slice/crio-9e87ab98bf0978ce3b63143a1b7c60110b7844a30ee4969ad329de62a9dde379 WatchSource:0}: Error finding container 9e87ab98bf0978ce3b63143a1b7c60110b7844a30ee4969ad329de62a9dde379: Status 404 returned error can't find the container with id 9e87ab98bf0978ce3b63143a1b7c60110b7844a30ee4969ad329de62a9dde379 Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.395288 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6676d6749-xf6gp"] Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.424670 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:41:56 crc kubenswrapper[4802]: W1004 05:41:56.425798 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8016b3f_0db5_4de5_8ec8_8210cc4f2195.slice/crio-71d3a5d396c402888adece42a2d1688e2cda77b2ddf240e707155fa720016732 WatchSource:0}: Error finding container 71d3a5d396c402888adece42a2d1688e2cda77b2ddf240e707155fa720016732: Status 404 returned error can't find the container with id 71d3a5d396c402888adece42a2d1688e2cda77b2ddf240e707155fa720016732 Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.795815 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.800330 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6676d6749-xf6gp" event={"ID":"0eaeda23-f357-4c31-8cd4-3f41d6a33a70","Type":"ContainerStarted","Data":"9e87ab98bf0978ce3b63143a1b7c60110b7844a30ee4969ad329de62a9dde379"} Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.806948 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bcc786889-f5krn" 
event={"ID":"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d","Type":"ContainerStarted","Data":"e845a7cfececcf0ce4aef1f7d5ba13d8cf79ba9bc2736e83ebdb4d234ec8c6ec"} Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.811807 4802 generic.go:334] "Generic (PLEG): container finished" podID="516ec0fa-3ee1-4110-82c3-2f6b480671e0" containerID="fa86f81162c63aaa7829ad4a5a931b123c7a0503266476d50af44dc34703817d" exitCode=0 Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.811865 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-cps8q" event={"ID":"516ec0fa-3ee1-4110-82c3-2f6b480671e0","Type":"ContainerDied","Data":"fa86f81162c63aaa7829ad4a5a931b123c7a0503266476d50af44dc34703817d"} Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.811886 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-cps8q" event={"ID":"516ec0fa-3ee1-4110-82c3-2f6b480671e0","Type":"ContainerStarted","Data":"9617d5ec30ccff6338cb30f9ffc46b36fbe6eea99c46a6f1a0f77b53298cd5c8"} Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.832846 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.833740 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c8016b3f-0db5-4de5-8ec8-8210cc4f2195","Type":"ContainerStarted","Data":"71d3a5d396c402888adece42a2d1688e2cda77b2ddf240e707155fa720016732"} Oct 04 05:41:56 crc kubenswrapper[4802]: I1004 05:41:56.980733 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.014862 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.039805 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.041672 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.046598 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.046879 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.048353 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.220378 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-ceph\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc 
kubenswrapper[4802]: I1004 05:41:57.220750 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.220773 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-config-data\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.220841 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.220900 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-logs\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.220934 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zkpx\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-kube-api-access-9zkpx\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 
05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.220985 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.221188 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-scripts\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.221328 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323154 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zkpx\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-kube-api-access-9zkpx\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323223 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 
05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323253 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-scripts\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323297 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323325 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-ceph\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323348 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323366 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-config-data\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323409 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.323457 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-logs\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.324328 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-logs\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.325263 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.325305 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.334275 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.334279 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-config-data\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.338873 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-ceph\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.343691 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-scripts\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.344014 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.349601 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zkpx\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-kube-api-access-9zkpx\") pod 
\"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.388143 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.681512 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:41:57 crc kubenswrapper[4802]: I1004 05:41:57.854631 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c8016b3f-0db5-4de5-8ec8-8210cc4f2195","Type":"ContainerStarted","Data":"50b87224066add82b474254ed6e56eff82a6e703cfacee9d1ca8039a13f2c8fc"} Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:57.902000 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf0554fa-2bf2-45d4-a620-7445764b693d","Type":"ContainerStarted","Data":"55d284ace8b3378b69103627c81668761e05bdf550f7ab57aa2204d406cd9e97"} Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:57.902138 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf0554fa-2bf2-45d4-a620-7445764b693d","Type":"ContainerStarted","Data":"4f443c4d4ba49aa05d91747a22f94f95eaba583a2aa68cc2851480783c483bbe"} Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:57.905031 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"7b5d6eca-76be-4473-8c62-b92cd50ba646","Type":"ContainerStarted","Data":"69d8f7e427d29952fdb9dde0ca0d54125d37d8cabb9435dcac9a8b32a3f78077"} Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:57.939508 4802 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.016844092 podStartE2EDuration="3.939487588s" podCreationTimestamp="2025-10-04 05:41:54 +0000 UTC" firstStartedPulling="2025-10-04 05:41:55.653328941 +0000 UTC m=+3358.061329556" lastFinishedPulling="2025-10-04 05:41:56.575972427 +0000 UTC m=+3358.983973052" observedRunningTime="2025-10-04 05:41:57.925950039 +0000 UTC m=+3360.333950674" watchObservedRunningTime="2025-10-04 05:41:57.939487588 +0000 UTC m=+3360.347488213" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.093466 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6676d6749-xf6gp"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.162706 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fdf6cb5fb-shljs"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.164261 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.176593 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.194271 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fdf6cb5fb-shljs"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.213240 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.276986 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-scripts\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.277046 4802 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-tls-certs\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.277070 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-combined-ca-bundle\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.277104 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spq46\" (UniqueName: \"kubernetes.io/projected/7638b318-b144-4dea-9a8c-6a694fce84a2-kube-api-access-spq46\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.277119 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-secret-key\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.277174 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-config-data\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 
05:41:58.277310 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7638b318-b144-4dea-9a8c-6a694fce84a2-logs\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.318113 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.381511 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-config-data\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.381839 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7638b318-b144-4dea-9a8c-6a694fce84a2-logs\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.381882 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-scripts\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.381920 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-tls-certs\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 
05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.381944 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-combined-ca-bundle\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.381974 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spq46\" (UniqueName: \"kubernetes.io/projected/7638b318-b144-4dea-9a8c-6a694fce84a2-kube-api-access-spq46\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.381990 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-secret-key\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.386318 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c7792e-77ae-4237-9296-075304dd36a5" path="/var/lib/kubelet/pods/48c7792e-77ae-4237-9296-075304dd36a5/volumes" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.389103 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-secret-key\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.390997 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-config-data\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.391054 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-scripts\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.391308 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7638b318-b144-4dea-9a8c-6a694fce84a2-logs\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.392049 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.411178 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-tls-certs\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.420089 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-combined-ca-bundle\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.422966 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bcc786889-f5krn"] Oct 04 
05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.436459 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58d88cc67b-v6jgr"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.443522 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spq46\" (UniqueName: \"kubernetes.io/projected/7638b318-b144-4dea-9a8c-6a694fce84a2-kube-api-access-spq46\") pod \"horizon-5fdf6cb5fb-shljs\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.447959 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.468702 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58d88cc67b-v6jgr"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.525436 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.586346 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-combined-ca-bundle\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.586733 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9cb164b-15ee-488d-ae7b-cc74da075072-config-data\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.586764 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9cb164b-15ee-488d-ae7b-cc74da075072-scripts\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.586798 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-horizon-tls-certs\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.586880 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4xwv\" (UniqueName: \"kubernetes.io/projected/c9cb164b-15ee-488d-ae7b-cc74da075072-kube-api-access-p4xwv\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.586916 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9cb164b-15ee-488d-ae7b-cc74da075072-logs\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.586964 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-horizon-secret-key\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.688249 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c9cb164b-15ee-488d-ae7b-cc74da075072-logs\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.688333 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-horizon-secret-key\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.688396 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-combined-ca-bundle\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.688470 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9cb164b-15ee-488d-ae7b-cc74da075072-config-data\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.688503 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9cb164b-15ee-488d-ae7b-cc74da075072-scripts\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.688537 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-horizon-tls-certs\") pod 
\"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.688599 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4xwv\" (UniqueName: \"kubernetes.io/projected/c9cb164b-15ee-488d-ae7b-cc74da075072-kube-api-access-p4xwv\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.688815 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9cb164b-15ee-488d-ae7b-cc74da075072-logs\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.689740 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9cb164b-15ee-488d-ae7b-cc74da075072-scripts\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.689926 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9cb164b-15ee-488d-ae7b-cc74da075072-config-data\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.710836 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-combined-ca-bundle\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc 
kubenswrapper[4802]: I1004 05:41:58.711448 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-horizon-tls-certs\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.715007 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4xwv\" (UniqueName: \"kubernetes.io/projected/c9cb164b-15ee-488d-ae7b-cc74da075072-kube-api-access-p4xwv\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.715441 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9cb164b-15ee-488d-ae7b-cc74da075072-horizon-secret-key\") pod \"horizon-58d88cc67b-v6jgr\" (UID: \"c9cb164b-15ee-488d-ae7b-cc74da075072\") " pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.863878 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.957242 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"7b5d6eca-76be-4473-8c62-b92cd50ba646","Type":"ContainerStarted","Data":"49f7e8b669692d6e37cf8e027747e43a73270511bc8c2c2dc25da1d499a4b07e"} Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.957516 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"7b5d6eca-76be-4473-8c62-b92cd50ba646","Type":"ContainerStarted","Data":"63a4be7c7d472d334d7cd4792bbeea490f82918c419c12686aa9855101642577"} Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.966877 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerName="glance-log" containerID="cri-o://50b87224066add82b474254ed6e56eff82a6e703cfacee9d1ca8039a13f2c8fc" gracePeriod=30 Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.966949 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c8016b3f-0db5-4de5-8ec8-8210cc4f2195","Type":"ContainerStarted","Data":"d8de1a458a1301129829262d896471cd07ddd1faa6200b2539d98b22237e6285"} Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.966998 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerName="glance-httpd" containerID="cri-o://d8de1a458a1301129829262d896471cd07ddd1faa6200b2539d98b22237e6285" gracePeriod=30 Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:58.983921 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.173874382 podStartE2EDuration="4.983904665s" podCreationTimestamp="2025-10-04 05:41:54 +0000 UTC" 
firstStartedPulling="2025-10-04 05:41:56.846491762 +0000 UTC m=+3359.254492387" lastFinishedPulling="2025-10-04 05:41:57.656522035 +0000 UTC m=+3360.064522670" observedRunningTime="2025-10-04 05:41:58.982241638 +0000 UTC m=+3361.390242263" watchObservedRunningTime="2025-10-04 05:41:58.983904665 +0000 UTC m=+3361.391905290" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.009942 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.009918253 podStartE2EDuration="4.009918253s" podCreationTimestamp="2025-10-04 05:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:41:59.001783775 +0000 UTC m=+3361.409784410" watchObservedRunningTime="2025-10-04 05:41:59.009918253 +0000 UTC m=+3361.417918898" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.592524 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-cps8q" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.620915 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58d88cc67b-v6jgr"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.661740 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.671147 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fdf6cb5fb-shljs"] Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.713139 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhkp8\" (UniqueName: \"kubernetes.io/projected/516ec0fa-3ee1-4110-82c3-2f6b480671e0-kube-api-access-zhkp8\") pod \"516ec0fa-3ee1-4110-82c3-2f6b480671e0\" (UID: \"516ec0fa-3ee1-4110-82c3-2f6b480671e0\") " Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.722893 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516ec0fa-3ee1-4110-82c3-2f6b480671e0-kube-api-access-zhkp8" (OuterVolumeSpecName: "kube-api-access-zhkp8") pod "516ec0fa-3ee1-4110-82c3-2f6b480671e0" (UID: "516ec0fa-3ee1-4110-82c3-2f6b480671e0"). InnerVolumeSpecName "kube-api-access-zhkp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.817816 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhkp8\" (UniqueName: \"kubernetes.io/projected/516ec0fa-3ee1-4110-82c3-2f6b480671e0-kube-api-access-zhkp8\") on node \"crc\" DevicePath \"\"" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.847707 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.875607 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.999736 4802 generic.go:334] "Generic (PLEG): container finished" podID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerID="d8de1a458a1301129829262d896471cd07ddd1faa6200b2539d98b22237e6285" exitCode=0 Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.999770 4802 generic.go:334] "Generic (PLEG): container finished" podID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerID="50b87224066add82b474254ed6e56eff82a6e703cfacee9d1ca8039a13f2c8fc" exitCode=143 Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.999820 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c8016b3f-0db5-4de5-8ec8-8210cc4f2195","Type":"ContainerDied","Data":"d8de1a458a1301129829262d896471cd07ddd1faa6200b2539d98b22237e6285"} Oct 04 05:41:59 crc kubenswrapper[4802]: I1004 05:41:59.999850 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c8016b3f-0db5-4de5-8ec8-8210cc4f2195","Type":"ContainerDied","Data":"50b87224066add82b474254ed6e56eff82a6e703cfacee9d1ca8039a13f2c8fc"} Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.001441 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdf6cb5fb-shljs" 
event={"ID":"7638b318-b144-4dea-9a8c-6a694fce84a2","Type":"ContainerStarted","Data":"d8b48e98bd544c77adaa7d112a4e9d47102f213d7cc2fd8ea7554bda92504420"} Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.005584 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2","Type":"ContainerStarted","Data":"181a914ae5fcd169309aa2daec3f03bf67053f3c35b70a751795e2c2d58d9f08"} Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.007321 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58d88cc67b-v6jgr" event={"ID":"c9cb164b-15ee-488d-ae7b-cc74da075072","Type":"ContainerStarted","Data":"6fadbb0fc08c589a14c4d2eddbcbcc0fa3f9f733fde7c0c23f99f327bf0f9266"} Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.009663 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-cps8q" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.013823 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-cps8q" event={"ID":"516ec0fa-3ee1-4110-82c3-2f6b480671e0","Type":"ContainerDied","Data":"9617d5ec30ccff6338cb30f9ffc46b36fbe6eea99c46a6f1a0f77b53298cd5c8"} Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.013904 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9617d5ec30ccff6338cb30f9ffc46b36fbe6eea99c46a6f1a0f77b53298cd5c8" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.136604 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.235575 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-scripts\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.236213 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-httpd-run\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.236308 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-logs\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.236348 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-internal-tls-certs\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.236385 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d6sf\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-kube-api-access-2d6sf\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.236459 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-combined-ca-bundle\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.236509 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-config-data\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.236762 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.236977 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-logs" (OuterVolumeSpecName: "logs") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.237033 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.237188 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-ceph\") pod \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\" (UID: \"c8016b3f-0db5-4de5-8ec8-8210cc4f2195\") " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.237834 4802 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.237856 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.243083 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-ceph" (OuterVolumeSpecName: "ceph") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.243994 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-scripts" (OuterVolumeSpecName: "scripts") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.245809 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-kube-api-access-2d6sf" (OuterVolumeSpecName: "kube-api-access-2d6sf") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "kube-api-access-2d6sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.246400 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.289103 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.291816 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.336058 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-config-data" (OuterVolumeSpecName: "config-data") pod "c8016b3f-0db5-4de5-8ec8-8210cc4f2195" (UID: "c8016b3f-0db5-4de5-8ec8-8210cc4f2195"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.339392 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.339419 4802 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.339436 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d6sf\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-kube-api-access-2d6sf\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.339446 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.339454 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.339483 4802 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.339492 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8016b3f-0db5-4de5-8ec8-8210cc4f2195-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.360484 4802 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 04 05:42:00 crc kubenswrapper[4802]: I1004 05:42:00.450674 4802 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.023946 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2","Type":"ContainerStarted","Data":"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d"} Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.027430 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c8016b3f-0db5-4de5-8ec8-8210cc4f2195","Type":"ContainerDied","Data":"71d3a5d396c402888adece42a2d1688e2cda77b2ddf240e707155fa720016732"} Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.027466 4802 scope.go:117] "RemoveContainer" containerID="d8de1a458a1301129829262d896471cd07ddd1faa6200b2539d98b22237e6285" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.027608 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.065914 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.079789 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.084499 4802 scope.go:117] "RemoveContainer" containerID="50b87224066add82b474254ed6e56eff82a6e703cfacee9d1ca8039a13f2c8fc" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.093190 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:42:01 crc kubenswrapper[4802]: E1004 05:42:01.093826 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerName="glance-httpd" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.093846 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerName="glance-httpd" Oct 04 05:42:01 crc kubenswrapper[4802]: E1004 05:42:01.093870 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516ec0fa-3ee1-4110-82c3-2f6b480671e0" containerName="mariadb-database-create" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.093876 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="516ec0fa-3ee1-4110-82c3-2f6b480671e0" containerName="mariadb-database-create" Oct 04 05:42:01 crc kubenswrapper[4802]: E1004 05:42:01.093890 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerName="glance-log" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.093895 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerName="glance-log" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.094196 
4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="516ec0fa-3ee1-4110-82c3-2f6b480671e0" containerName="mariadb-database-create" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.094226 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerName="glance-httpd" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.094244 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" containerName="glance-log" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.095836 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.097823 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.098117 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.120035 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.176775 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.176827 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.176890 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcr5t\" (UniqueName: \"kubernetes.io/projected/06eadbf9-f32f-4702-9ec7-16ee44f3022e-kube-api-access-kcr5t\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.176916 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.176937 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06eadbf9-f32f-4702-9ec7-16ee44f3022e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.176986 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.177027 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06eadbf9-f32f-4702-9ec7-16ee44f3022e-ceph\") pod 
\"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.177067 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.177138 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06eadbf9-f32f-4702-9ec7-16ee44f3022e-logs\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.279301 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06eadbf9-f32f-4702-9ec7-16ee44f3022e-logs\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.279383 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.279454 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.279829 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06eadbf9-f32f-4702-9ec7-16ee44f3022e-logs\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.280418 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcr5t\" (UniqueName: \"kubernetes.io/projected/06eadbf9-f32f-4702-9ec7-16ee44f3022e-kube-api-access-kcr5t\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.280453 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.280471 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06eadbf9-f32f-4702-9ec7-16ee44f3022e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.280541 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " 
pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.280601 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06eadbf9-f32f-4702-9ec7-16ee44f3022e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.280674 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.281020 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.281757 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06eadbf9-f32f-4702-9ec7-16ee44f3022e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.285854 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 
05:42:01.289785 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.301252 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.335747 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06eadbf9-f32f-4702-9ec7-16ee44f3022e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.336628 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/06eadbf9-f32f-4702-9ec7-16ee44f3022e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.347487 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcr5t\" (UniqueName: \"kubernetes.io/projected/06eadbf9-f32f-4702-9ec7-16ee44f3022e-kube-api-access-kcr5t\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.430920 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"06eadbf9-f32f-4702-9ec7-16ee44f3022e\") " pod="openstack/glance-default-internal-api-0" Oct 04 05:42:01 crc kubenswrapper[4802]: I1004 05:42:01.448076 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.051198 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2","Type":"ContainerStarted","Data":"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f"} Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.052003 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerName="glance-log" containerID="cri-o://2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d" gracePeriod=30 Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.052154 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerName="glance-httpd" containerID="cri-o://8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f" gracePeriod=30 Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.098049 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.113991 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.113964933 podStartE2EDuration="6.113964933s" podCreationTimestamp="2025-10-04 05:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:42:02.088817789 +0000 UTC m=+3364.496818414" watchObservedRunningTime="2025-10-04 05:42:02.113964933 +0000 UTC m=+3364.521965568" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.360501 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:42:02 crc kubenswrapper[4802]: E1004 05:42:02.361244 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.380305 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8016b3f-0db5-4de5-8ec8-8210cc4f2195" path="/var/lib/kubelet/pods/c8016b3f-0db5-4de5-8ec8-8210cc4f2195/volumes" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.687254 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.833722 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-public-tls-certs\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.834149 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-logs\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.834196 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-scripts\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.834224 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-combined-ca-bundle\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.834305 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zkpx\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-kube-api-access-9zkpx\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.834367 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-httpd-run\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.834408 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-config-data\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.834569 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.834675 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-ceph\") pod \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\" (UID: \"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2\") " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.835526 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-logs" (OuterVolumeSpecName: "logs") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.835820 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.841508 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.843215 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-kube-api-access-9zkpx" (OuterVolumeSpecName: "kube-api-access-9zkpx") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "kube-api-access-9zkpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.848693 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-scripts" (OuterVolumeSpecName: "scripts") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.852157 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-ceph" (OuterVolumeSpecName: "ceph") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.877260 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.906571 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.913841 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-config-data" (OuterVolumeSpecName: "config-data") pod "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" (UID: "be79fa39-da6a-4c2d-89e8-6fc9f3559cb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938121 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zkpx\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-kube-api-access-9zkpx\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938154 4802 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938164 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938195 4802 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938204 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-ceph\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938213 4802 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938220 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938227 4802 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.938235 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:02 crc kubenswrapper[4802]: I1004 05:42:02.961707 4802 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.040050 4802 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.071712 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06eadbf9-f32f-4702-9ec7-16ee44f3022e","Type":"ContainerStarted","Data":"df67ff50104fb36ba730bebdbc1d0ce853705a8fb889c02ad456d22cef9e9fcc"} Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.071761 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06eadbf9-f32f-4702-9ec7-16ee44f3022e","Type":"ContainerStarted","Data":"be3404fb89a34160a4813e2529cc6b81044379957edb88e5a72a67b269f14437"} Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.074523 4802 generic.go:334] "Generic (PLEG): container finished" podID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerID="8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f" exitCode=143 Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.074548 4802 generic.go:334] "Generic (PLEG): container finished" podID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" 
containerID="2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d" exitCode=143 Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.074566 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2","Type":"ContainerDied","Data":"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f"} Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.074585 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2","Type":"ContainerDied","Data":"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d"} Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.074598 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be79fa39-da6a-4c2d-89e8-6fc9f3559cb2","Type":"ContainerDied","Data":"181a914ae5fcd169309aa2daec3f03bf67053f3c35b70a751795e2c2d58d9f08"} Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.074614 4802 scope.go:117] "RemoveContainer" containerID="8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.074682 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.107523 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.122663 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.144392 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:42:03 crc kubenswrapper[4802]: E1004 05:42:03.144889 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerName="glance-httpd" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.144907 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerName="glance-httpd" Oct 04 05:42:03 crc kubenswrapper[4802]: E1004 05:42:03.144944 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerName="glance-log" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.144953 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerName="glance-log" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.145191 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerName="glance-httpd" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.145216 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" containerName="glance-log" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.146395 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.150661 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.150794 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.154759 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251046 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251113 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b55104-4245-45f1-91ef-c1b4fd6682e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251154 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3b55104-4245-45f1-91ef-c1b4fd6682e4-ceph\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251203 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pns\" (UniqueName: 
\"kubernetes.io/projected/c3b55104-4245-45f1-91ef-c1b4fd6682e4-kube-api-access-95pns\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251273 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251370 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251450 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b55104-4245-45f1-91ef-c1b4fd6682e4-logs\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251507 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.251533 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.353528 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3b55104-4245-45f1-91ef-c1b4fd6682e4-ceph\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.353673 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95pns\" (UniqueName: \"kubernetes.io/projected/c3b55104-4245-45f1-91ef-c1b4fd6682e4-kube-api-access-95pns\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.353730 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.353763 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.353821 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b55104-4245-45f1-91ef-c1b4fd6682e4-logs\") pod 
\"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.353892 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.353953 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.354043 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.354114 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.354500 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b55104-4245-45f1-91ef-c1b4fd6682e4-logs\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") 
" pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.355825 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b55104-4245-45f1-91ef-c1b4fd6682e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.356517 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b55104-4245-45f1-91ef-c1b4fd6682e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.360423 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.361386 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.362133 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3b55104-4245-45f1-91ef-c1b4fd6682e4-ceph\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.363689 4802 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.363791 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b55104-4245-45f1-91ef-c1b4fd6682e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.373394 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95pns\" (UniqueName: \"kubernetes.io/projected/c3b55104-4245-45f1-91ef-c1b4fd6682e4-kube-api-access-95pns\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.387515 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b55104-4245-45f1-91ef-c1b4fd6682e4\") " pod="openstack/glance-default-external-api-0" Oct 04 05:42:03 crc kubenswrapper[4802]: I1004 05:42:03.517332 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 04 05:42:04 crc kubenswrapper[4802]: I1004 05:42:04.385496 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be79fa39-da6a-4c2d-89e8-6fc9f3559cb2" path="/var/lib/kubelet/pods/be79fa39-da6a-4c2d-89e8-6fc9f3559cb2/volumes" Oct 04 05:42:05 crc kubenswrapper[4802]: I1004 05:42:05.060030 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Oct 04 05:42:05 crc kubenswrapper[4802]: I1004 05:42:05.098892 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Oct 04 05:42:07 crc kubenswrapper[4802]: I1004 05:42:07.976865 4802 scope.go:117] "RemoveContainer" containerID="2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.168821 4802 scope.go:117] "RemoveContainer" containerID="8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f" Oct 04 05:42:08 crc kubenswrapper[4802]: E1004 05:42:08.169695 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f\": container with ID starting with 8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f not found: ID does not exist" containerID="8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.169737 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f"} err="failed to get container status \"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f\": rpc error: code = NotFound desc = could not find container \"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f\": container with ID starting with 
8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f not found: ID does not exist" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.169772 4802 scope.go:117] "RemoveContainer" containerID="2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d" Oct 04 05:42:08 crc kubenswrapper[4802]: E1004 05:42:08.170507 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d\": container with ID starting with 2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d not found: ID does not exist" containerID="2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.170536 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d"} err="failed to get container status \"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d\": rpc error: code = NotFound desc = could not find container \"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d\": container with ID starting with 2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d not found: ID does not exist" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.170553 4802 scope.go:117] "RemoveContainer" containerID="8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.171484 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f"} err="failed to get container status \"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f\": rpc error: code = NotFound desc = could not find container \"8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f\": container with ID 
starting with 8d3aa63ab86a4baefbb28b1c1a2298c3f3c27ec7cef5f2d2cdf5e9ec1bcdd23f not found: ID does not exist" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.171529 4802 scope.go:117] "RemoveContainer" containerID="2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.171819 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d"} err="failed to get container status \"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d\": rpc error: code = NotFound desc = could not find container \"2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d\": container with ID starting with 2c465dfc7e0a20aa98fab23471c11aa2b2abe545d311d3a0ebb1683ffd66dc1d not found: ID does not exist" Oct 04 05:42:08 crc kubenswrapper[4802]: I1004 05:42:08.765628 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.142069 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b55104-4245-45f1-91ef-c1b4fd6682e4","Type":"ContainerStarted","Data":"344137a18aadac731ba1d65eb7bc87f09910b73f066c610e27926b9aa7c3c5bb"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.146619 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bcc786889-f5krn" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerName="horizon-log" containerID="cri-o://4f607adcfbd6cff6ec338788edb9b3bfbd8efea100895b9802f977443e39ca4a" gracePeriod=30 Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.146873 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bcc786889-f5krn" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerName="horizon" 
containerID="cri-o://7cc9f318faf2cce6ba5b0acd793033290a136deebd194d0a950bcb21c37ec8a2" gracePeriod=30 Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.146882 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bcc786889-f5krn" event={"ID":"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d","Type":"ContainerStarted","Data":"7cc9f318faf2cce6ba5b0acd793033290a136deebd194d0a950bcb21c37ec8a2"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.147002 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bcc786889-f5krn" event={"ID":"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d","Type":"ContainerStarted","Data":"4f607adcfbd6cff6ec338788edb9b3bfbd8efea100895b9802f977443e39ca4a"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.154143 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58d88cc67b-v6jgr" event={"ID":"c9cb164b-15ee-488d-ae7b-cc74da075072","Type":"ContainerStarted","Data":"c83be62514bf21f03e3ca197ecb7272a3b664fc7a8e9be34155d96be48df8516"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.154212 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58d88cc67b-v6jgr" event={"ID":"c9cb164b-15ee-488d-ae7b-cc74da075072","Type":"ContainerStarted","Data":"ef6adfe0c645bd5f491a7261855ccbf4226f9e837d37b6af6dbe4fdf96c57ca0"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.157825 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06eadbf9-f32f-4702-9ec7-16ee44f3022e","Type":"ContainerStarted","Data":"39a6b9ae3d5e1e4302d76e644a8c7c9fea7e24860a1969b21dadf9500a79b22b"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.163082 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdf6cb5fb-shljs" event={"ID":"7638b318-b144-4dea-9a8c-6a694fce84a2","Type":"ContainerStarted","Data":"c8ce3eb4eb6868d5edeb2b57be9e9781f9583274c26a5d874c01a4b346236639"} Oct 04 05:42:09 crc 
kubenswrapper[4802]: I1004 05:42:09.163124 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdf6cb5fb-shljs" event={"ID":"7638b318-b144-4dea-9a8c-6a694fce84a2","Type":"ContainerStarted","Data":"7f111214cc3791fe390db8367b7c6a624f5e41ae2460776e8f52f0ac67c748d2"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.173008 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bcc786889-f5krn" podStartSLOduration=2.296440541 podStartE2EDuration="14.172991902s" podCreationTimestamp="2025-10-04 05:41:55 +0000 UTC" firstStartedPulling="2025-10-04 05:41:56.261282524 +0000 UTC m=+3358.669283149" lastFinishedPulling="2025-10-04 05:42:08.137833875 +0000 UTC m=+3370.545834510" observedRunningTime="2025-10-04 05:42:09.167717194 +0000 UTC m=+3371.575717839" watchObservedRunningTime="2025-10-04 05:42:09.172991902 +0000 UTC m=+3371.580992527" Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.179202 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6676d6749-xf6gp" event={"ID":"0eaeda23-f357-4c31-8cd4-3f41d6a33a70","Type":"ContainerStarted","Data":"36fb1458c5629108699c3fb5154cb55ff3039f63c7f2c758daf9aabbca3211a6"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.179247 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6676d6749-xf6gp" event={"ID":"0eaeda23-f357-4c31-8cd4-3f41d6a33a70","Type":"ContainerStarted","Data":"0051bb9eb0bca1d2dab1edde8bc61daeeecf10eda4e3357af5311ffc2339c23b"} Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.179375 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6676d6749-xf6gp" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerName="horizon-log" containerID="cri-o://0051bb9eb0bca1d2dab1edde8bc61daeeecf10eda4e3357af5311ffc2339c23b" gracePeriod=30 Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.179683 4802 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/horizon-6676d6749-xf6gp" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerName="horizon" containerID="cri-o://36fb1458c5629108699c3fb5154cb55ff3039f63c7f2c758daf9aabbca3211a6" gracePeriod=30 Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.194438 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58d88cc67b-v6jgr" podStartSLOduration=2.668419634 podStartE2EDuration="11.194417972s" podCreationTimestamp="2025-10-04 05:41:58 +0000 UTC" firstStartedPulling="2025-10-04 05:41:59.644044999 +0000 UTC m=+3362.052045624" lastFinishedPulling="2025-10-04 05:42:08.170043337 +0000 UTC m=+3370.578043962" observedRunningTime="2025-10-04 05:42:09.193011892 +0000 UTC m=+3371.601012517" watchObservedRunningTime="2025-10-04 05:42:09.194417972 +0000 UTC m=+3371.602418597" Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.221617 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.221601083 podStartE2EDuration="8.221601083s" podCreationTimestamp="2025-10-04 05:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:42:09.214399951 +0000 UTC m=+3371.622400586" watchObservedRunningTime="2025-10-04 05:42:09.221601083 +0000 UTC m=+3371.629601708" Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.236580 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fdf6cb5fb-shljs" podStartSLOduration=2.7851617539999998 podStartE2EDuration="11.236559812s" podCreationTimestamp="2025-10-04 05:41:58 +0000 UTC" firstStartedPulling="2025-10-04 05:41:59.67441648 +0000 UTC m=+3362.082417105" lastFinishedPulling="2025-10-04 05:42:08.125814538 +0000 UTC m=+3370.533815163" observedRunningTime="2025-10-04 05:42:09.233802195 +0000 UTC m=+3371.641802820" 
watchObservedRunningTime="2025-10-04 05:42:09.236559812 +0000 UTC m=+3371.644560427" Oct 04 05:42:09 crc kubenswrapper[4802]: I1004 05:42:09.269194 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6676d6749-xf6gp" podStartSLOduration=2.555001012 podStartE2EDuration="14.269175155s" podCreationTimestamp="2025-10-04 05:41:55 +0000 UTC" firstStartedPulling="2025-10-04 05:41:56.393892368 +0000 UTC m=+3358.801892993" lastFinishedPulling="2025-10-04 05:42:08.108066511 +0000 UTC m=+3370.516067136" observedRunningTime="2025-10-04 05:42:09.249553986 +0000 UTC m=+3371.657554621" watchObservedRunningTime="2025-10-04 05:42:09.269175155 +0000 UTC m=+3371.677175770" Oct 04 05:42:10 crc kubenswrapper[4802]: I1004 05:42:10.199582 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b55104-4245-45f1-91ef-c1b4fd6682e4","Type":"ContainerStarted","Data":"b430180ff62819e27f1c1ffecf711ff38a246c8cffec5890c3d6260e7cf640a2"} Oct 04 05:42:10 crc kubenswrapper[4802]: I1004 05:42:10.200183 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b55104-4245-45f1-91ef-c1b4fd6682e4","Type":"ContainerStarted","Data":"240730264b89799bdcac6bce90c59cf2737489e33dae6bb1bf970494a11ef237"} Oct 04 05:42:10 crc kubenswrapper[4802]: I1004 05:42:10.228775 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.228737065 podStartE2EDuration="7.228737065s" podCreationTimestamp="2025-10-04 05:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:42:10.224885857 +0000 UTC m=+3372.632886492" watchObservedRunningTime="2025-10-04 05:42:10.228737065 +0000 UTC m=+3372.636737690" Oct 04 05:42:11 crc kubenswrapper[4802]: I1004 05:42:11.448949 4802 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:11 crc kubenswrapper[4802]: I1004 05:42:11.449271 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:11 crc kubenswrapper[4802]: I1004 05:42:11.485032 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:11 crc kubenswrapper[4802]: I1004 05:42:11.503785 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:12 crc kubenswrapper[4802]: I1004 05:42:12.221812 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:12 crc kubenswrapper[4802]: I1004 05:42:12.222255 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:13 crc kubenswrapper[4802]: I1004 05:42:13.518142 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 04 05:42:13 crc kubenswrapper[4802]: I1004 05:42:13.518191 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 04 05:42:13 crc kubenswrapper[4802]: I1004 05:42:13.627270 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 04 05:42:13 crc kubenswrapper[4802]: I1004 05:42:13.632148 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 04 05:42:14 crc kubenswrapper[4802]: I1004 05:42:14.252156 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 04 05:42:14 crc kubenswrapper[4802]: I1004 05:42:14.252479 4802 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 04 05:42:14 crc kubenswrapper[4802]: I1004 05:42:14.542178 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.359537 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:42:15 crc kubenswrapper[4802]: E1004 05:42:15.360061 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.365256 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-ae60-account-create-7bfx8"] Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.366655 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-ae60-account-create-7bfx8" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.368409 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.377822 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-ae60-account-create-7bfx8"] Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.540054 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7l9\" (UniqueName: \"kubernetes.io/projected/9b9ce204-85cc-4eed-9ca0-9c6c867786ea-kube-api-access-cm7l9\") pod \"manila-ae60-account-create-7bfx8\" (UID: \"9b9ce204-85cc-4eed-9ca0-9c6c867786ea\") " pod="openstack/manila-ae60-account-create-7bfx8" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.626224 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.642230 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7l9\" (UniqueName: \"kubernetes.io/projected/9b9ce204-85cc-4eed-9ca0-9c6c867786ea-kube-api-access-cm7l9\") pod \"manila-ae60-account-create-7bfx8\" (UID: \"9b9ce204-85cc-4eed-9ca0-9c6c867786ea\") " pod="openstack/manila-ae60-account-create-7bfx8" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.667272 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7l9\" (UniqueName: \"kubernetes.io/projected/9b9ce204-85cc-4eed-9ca0-9c6c867786ea-kube-api-access-cm7l9\") pod \"manila-ae60-account-create-7bfx8\" (UID: \"9b9ce204-85cc-4eed-9ca0-9c6c867786ea\") " pod="openstack/manila-ae60-account-create-7bfx8" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.706738 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-ae60-account-create-7bfx8" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.714919 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:42:15 crc kubenswrapper[4802]: I1004 05:42:15.848508 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6676d6749-xf6gp" Oct 04 05:42:16 crc kubenswrapper[4802]: I1004 05:42:16.201941 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-ae60-account-create-7bfx8"] Oct 04 05:42:16 crc kubenswrapper[4802]: I1004 05:42:16.269118 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 05:42:16 crc kubenswrapper[4802]: I1004 05:42:16.269351 4802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 04 05:42:16 crc kubenswrapper[4802]: I1004 05:42:16.269166 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ae60-account-create-7bfx8" event={"ID":"9b9ce204-85cc-4eed-9ca0-9c6c867786ea","Type":"ContainerStarted","Data":"bd159fa8f641b61e98a5e743d6805f85b195ae75388a37451e7e2ac7508d82f3"} Oct 04 05:42:16 crc kubenswrapper[4802]: I1004 05:42:16.756330 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 04 05:42:16 crc kubenswrapper[4802]: I1004 05:42:16.757055 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 04 05:42:17 crc kubenswrapper[4802]: I1004 05:42:17.301348 4802 generic.go:334] "Generic (PLEG): container finished" podID="9b9ce204-85cc-4eed-9ca0-9c6c867786ea" containerID="a159a6d85a674781afb01a384880f47ce171d858f69f1145a3722dc0b2520271" exitCode=0 Oct 04 05:42:17 crc kubenswrapper[4802]: I1004 05:42:17.301452 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ae60-account-create-7bfx8" 
event={"ID":"9b9ce204-85cc-4eed-9ca0-9c6c867786ea","Type":"ContainerDied","Data":"a159a6d85a674781afb01a384880f47ce171d858f69f1145a3722dc0b2520271"} Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.527423 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.527789 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.528536 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdf6cb5fb-shljs" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused" Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.690116 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-ae60-account-create-7bfx8" Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.811590 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm7l9\" (UniqueName: \"kubernetes.io/projected/9b9ce204-85cc-4eed-9ca0-9c6c867786ea-kube-api-access-cm7l9\") pod \"9b9ce204-85cc-4eed-9ca0-9c6c867786ea\" (UID: \"9b9ce204-85cc-4eed-9ca0-9c6c867786ea\") " Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.819671 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9ce204-85cc-4eed-9ca0-9c6c867786ea-kube-api-access-cm7l9" (OuterVolumeSpecName: "kube-api-access-cm7l9") pod "9b9ce204-85cc-4eed-9ca0-9c6c867786ea" (UID: "9b9ce204-85cc-4eed-9ca0-9c6c867786ea"). InnerVolumeSpecName "kube-api-access-cm7l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.864009 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.864445 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.866181 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-58d88cc67b-v6jgr" podUID="c9cb164b-15ee-488d-ae7b-cc74da075072" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Oct 04 05:42:18 crc kubenswrapper[4802]: I1004 05:42:18.913864 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm7l9\" (UniqueName: \"kubernetes.io/projected/9b9ce204-85cc-4eed-9ca0-9c6c867786ea-kube-api-access-cm7l9\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:19 crc kubenswrapper[4802]: I1004 05:42:19.318149 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-ae60-account-create-7bfx8" Oct 04 05:42:19 crc kubenswrapper[4802]: I1004 05:42:19.321690 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ae60-account-create-7bfx8" event={"ID":"9b9ce204-85cc-4eed-9ca0-9c6c867786ea","Type":"ContainerDied","Data":"bd159fa8f641b61e98a5e743d6805f85b195ae75388a37451e7e2ac7508d82f3"} Oct 04 05:42:19 crc kubenswrapper[4802]: I1004 05:42:19.321736 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd159fa8f641b61e98a5e743d6805f85b195ae75388a37451e7e2ac7508d82f3" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.696039 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-q7cpd"] Oct 04 05:42:20 crc kubenswrapper[4802]: E1004 05:42:20.696431 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9ce204-85cc-4eed-9ca0-9c6c867786ea" containerName="mariadb-account-create" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.696446 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9ce204-85cc-4eed-9ca0-9c6c867786ea" containerName="mariadb-account-create" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.696615 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9ce204-85cc-4eed-9ca0-9c6c867786ea" containerName="mariadb-account-create" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.697193 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.699193 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-q5d4m" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.699625 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.713030 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-q7cpd"] Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.792050 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-job-config-data\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.792293 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-config-data\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.792393 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-combined-ca-bundle\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.792509 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc92k\" (UniqueName: 
\"kubernetes.io/projected/fe45cab7-328c-41b5-8b99-fdc57a6c3727-kube-api-access-gc92k\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.893950 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc92k\" (UniqueName: \"kubernetes.io/projected/fe45cab7-328c-41b5-8b99-fdc57a6c3727-kube-api-access-gc92k\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.894132 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-job-config-data\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.894165 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-config-data\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.894223 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-combined-ca-bundle\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.905328 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-job-config-data\") pod \"manila-db-sync-q7cpd\" (UID: 
\"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.905472 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-combined-ca-bundle\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.905702 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-config-data\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:20 crc kubenswrapper[4802]: I1004 05:42:20.913116 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc92k\" (UniqueName: \"kubernetes.io/projected/fe45cab7-328c-41b5-8b99-fdc57a6c3727-kube-api-access-gc92k\") pod \"manila-db-sync-q7cpd\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") " pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:21 crc kubenswrapper[4802]: I1004 05:42:21.019433 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-q7cpd" Oct 04 05:42:21 crc kubenswrapper[4802]: I1004 05:42:21.553726 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-q7cpd"] Oct 04 05:42:22 crc kubenswrapper[4802]: I1004 05:42:22.342425 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7cpd" event={"ID":"fe45cab7-328c-41b5-8b99-fdc57a6c3727","Type":"ContainerStarted","Data":"8ad1ba028b0704e95b171af1e85ec252824b460b4bb726dafbf940242e14f9e5"} Oct 04 05:42:26 crc kubenswrapper[4802]: I1004 05:42:26.386239 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7cpd" event={"ID":"fe45cab7-328c-41b5-8b99-fdc57a6c3727","Type":"ContainerStarted","Data":"3a76aff782f46578935394bb3e691f8e1f46db6470201e7aff2f13d04c468a53"} Oct 04 05:42:26 crc kubenswrapper[4802]: I1004 05:42:26.409358 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-q7cpd" podStartSLOduration=2.225008117 podStartE2EDuration="6.409341348s" podCreationTimestamp="2025-10-04 05:42:20 +0000 UTC" firstStartedPulling="2025-10-04 05:42:21.559744818 +0000 UTC m=+3383.967745443" lastFinishedPulling="2025-10-04 05:42:25.744078049 +0000 UTC m=+3388.152078674" observedRunningTime="2025-10-04 05:42:26.404878093 +0000 UTC m=+3388.812878748" watchObservedRunningTime="2025-10-04 05:42:26.409341348 +0000 UTC m=+3388.817341963" Oct 04 05:42:28 crc kubenswrapper[4802]: I1004 05:42:28.529562 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fdf6cb5fb-shljs" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused" Oct 04 05:42:30 crc kubenswrapper[4802]: I1004 05:42:30.360085 4802 scope.go:117] "RemoveContainer" 
containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:42:30 crc kubenswrapper[4802]: E1004 05:42:30.360914 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:42:30 crc kubenswrapper[4802]: I1004 05:42:30.611364 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:42:32 crc kubenswrapper[4802]: I1004 05:42:32.389935 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-58d88cc67b-v6jgr" Oct 04 05:42:32 crc kubenswrapper[4802]: I1004 05:42:32.457313 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fdf6cb5fb-shljs"] Oct 04 05:42:32 crc kubenswrapper[4802]: I1004 05:42:32.457590 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fdf6cb5fb-shljs" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon-log" containerID="cri-o://7f111214cc3791fe390db8367b7c6a624f5e41ae2460776e8f52f0ac67c748d2" gracePeriod=30 Oct 04 05:42:32 crc kubenswrapper[4802]: I1004 05:42:32.457768 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fdf6cb5fb-shljs" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon" containerID="cri-o://c8ce3eb4eb6868d5edeb2b57be9e9781f9583274c26a5d874c01a4b346236639" gracePeriod=30 Oct 04 05:42:33 crc kubenswrapper[4802]: I1004 05:42:33.473614 4802 generic.go:334] "Generic (PLEG): container finished" podID="7638b318-b144-4dea-9a8c-6a694fce84a2" 
containerID="c8ce3eb4eb6868d5edeb2b57be9e9781f9583274c26a5d874c01a4b346236639" exitCode=0 Oct 04 05:42:33 crc kubenswrapper[4802]: I1004 05:42:33.473665 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdf6cb5fb-shljs" event={"ID":"7638b318-b144-4dea-9a8c-6a694fce84a2","Type":"ContainerDied","Data":"c8ce3eb4eb6868d5edeb2b57be9e9781f9583274c26a5d874c01a4b346236639"} Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.546927 4802 generic.go:334] "Generic (PLEG): container finished" podID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerID="7cc9f318faf2cce6ba5b0acd793033290a136deebd194d0a950bcb21c37ec8a2" exitCode=137 Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.547354 4802 generic.go:334] "Generic (PLEG): container finished" podID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerID="4f607adcfbd6cff6ec338788edb9b3bfbd8efea100895b9802f977443e39ca4a" exitCode=137 Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.547014 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bcc786889-f5krn" event={"ID":"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d","Type":"ContainerDied","Data":"7cc9f318faf2cce6ba5b0acd793033290a136deebd194d0a950bcb21c37ec8a2"} Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.547477 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bcc786889-f5krn" event={"ID":"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d","Type":"ContainerDied","Data":"4f607adcfbd6cff6ec338788edb9b3bfbd8efea100895b9802f977443e39ca4a"} Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.549859 4802 generic.go:334] "Generic (PLEG): container finished" podID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerID="36fb1458c5629108699c3fb5154cb55ff3039f63c7f2c758daf9aabbca3211a6" exitCode=137 Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.549876 4802 generic.go:334] "Generic (PLEG): container finished" podID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" 
containerID="0051bb9eb0bca1d2dab1edde8bc61daeeecf10eda4e3357af5311ffc2339c23b" exitCode=137 Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.549901 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6676d6749-xf6gp" event={"ID":"0eaeda23-f357-4c31-8cd4-3f41d6a33a70","Type":"ContainerDied","Data":"36fb1458c5629108699c3fb5154cb55ff3039f63c7f2c758daf9aabbca3211a6"} Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.549931 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6676d6749-xf6gp" event={"ID":"0eaeda23-f357-4c31-8cd4-3f41d6a33a70","Type":"ContainerDied","Data":"0051bb9eb0bca1d2dab1edde8bc61daeeecf10eda4e3357af5311ffc2339c23b"} Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.721326 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bcc786889-f5krn" Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.806095 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-scripts\") pod \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.806278 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnvb4\" (UniqueName: \"kubernetes.io/projected/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-kube-api-access-fnvb4\") pod \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.806396 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-config-data\") pod \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 
05:42:39.806425 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-logs\") pod \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.806924 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-horizon-secret-key\") pod \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\" (UID: \"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d\") " Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.806977 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-logs" (OuterVolumeSpecName: "logs") pod "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" (UID: "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.807686 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.812583 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-kube-api-access-fnvb4" (OuterVolumeSpecName: "kube-api-access-fnvb4") pod "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" (UID: "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d"). InnerVolumeSpecName "kube-api-access-fnvb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.824927 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" (UID: "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.832061 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-scripts" (OuterVolumeSpecName: "scripts") pod "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" (UID: "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.840243 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-config-data" (OuterVolumeSpecName: "config-data") pod "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" (UID: "dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.909030 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnvb4\" (UniqueName: \"kubernetes.io/projected/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-kube-api-access-fnvb4\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.909065 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-config-data\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.909078 4802 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:39 crc kubenswrapper[4802]: I1004 05:42:39.909090 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d-scripts\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.083116 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6676d6749-xf6gp"
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.215353 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-logs\") pod \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") "
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.215463 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-horizon-secret-key\") pod \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") "
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.215543 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lshcg\" (UniqueName: \"kubernetes.io/projected/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-kube-api-access-lshcg\") pod \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") "
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.215762 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-scripts\") pod \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") "
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.215807 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-config-data\") pod \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\" (UID: \"0eaeda23-f357-4c31-8cd4-3f41d6a33a70\") "
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.215917 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-logs" (OuterVolumeSpecName: "logs") pod "0eaeda23-f357-4c31-8cd4-3f41d6a33a70" (UID: "0eaeda23-f357-4c31-8cd4-3f41d6a33a70"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.216467 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-logs\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.219212 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-kube-api-access-lshcg" (OuterVolumeSpecName: "kube-api-access-lshcg") pod "0eaeda23-f357-4c31-8cd4-3f41d6a33a70" (UID: "0eaeda23-f357-4c31-8cd4-3f41d6a33a70"). InnerVolumeSpecName "kube-api-access-lshcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.219724 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0eaeda23-f357-4c31-8cd4-3f41d6a33a70" (UID: "0eaeda23-f357-4c31-8cd4-3f41d6a33a70"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.243426 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-scripts" (OuterVolumeSpecName: "scripts") pod "0eaeda23-f357-4c31-8cd4-3f41d6a33a70" (UID: "0eaeda23-f357-4c31-8cd4-3f41d6a33a70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.249072 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-config-data" (OuterVolumeSpecName: "config-data") pod "0eaeda23-f357-4c31-8cd4-3f41d6a33a70" (UID: "0eaeda23-f357-4c31-8cd4-3f41d6a33a70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.318260 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lshcg\" (UniqueName: \"kubernetes.io/projected/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-kube-api-access-lshcg\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.318315 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-scripts\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.318335 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-config-data\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.318353 4802 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0eaeda23-f357-4c31-8cd4-3f41d6a33a70-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.563188 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6676d6749-xf6gp"
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.563158 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6676d6749-xf6gp" event={"ID":"0eaeda23-f357-4c31-8cd4-3f41d6a33a70","Type":"ContainerDied","Data":"9e87ab98bf0978ce3b63143a1b7c60110b7844a30ee4969ad329de62a9dde379"}
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.563320 4802 scope.go:117] "RemoveContainer" containerID="36fb1458c5629108699c3fb5154cb55ff3039f63c7f2c758daf9aabbca3211a6"
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.568007 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bcc786889-f5krn" event={"ID":"dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d","Type":"ContainerDied","Data":"e845a7cfececcf0ce4aef1f7d5ba13d8cf79ba9bc2736e83ebdb4d234ec8c6ec"}
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.568097 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bcc786889-f5krn"
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.598490 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6676d6749-xf6gp"]
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.615798 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6676d6749-xf6gp"]
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.618032 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bcc786889-f5krn"]
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.626887 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bcc786889-f5krn"]
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.781346 4802 scope.go:117] "RemoveContainer" containerID="0051bb9eb0bca1d2dab1edde8bc61daeeecf10eda4e3357af5311ffc2339c23b"
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.802758 4802 scope.go:117] "RemoveContainer" containerID="7cc9f318faf2cce6ba5b0acd793033290a136deebd194d0a950bcb21c37ec8a2"
Oct 04 05:42:40 crc kubenswrapper[4802]: I1004 05:42:40.987926 4802 scope.go:117] "RemoveContainer" containerID="4f607adcfbd6cff6ec338788edb9b3bfbd8efea100895b9802f977443e39ca4a"
Oct 04 05:42:41 crc kubenswrapper[4802]: I1004 05:42:41.585638 4802 generic.go:334] "Generic (PLEG): container finished" podID="fe45cab7-328c-41b5-8b99-fdc57a6c3727" containerID="3a76aff782f46578935394bb3e691f8e1f46db6470201e7aff2f13d04c468a53" exitCode=0
Oct 04 05:42:41 crc kubenswrapper[4802]: I1004 05:42:41.585802 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7cpd" event={"ID":"fe45cab7-328c-41b5-8b99-fdc57a6c3727","Type":"ContainerDied","Data":"3a76aff782f46578935394bb3e691f8e1f46db6470201e7aff2f13d04c468a53"}
Oct 04 05:42:42 crc kubenswrapper[4802]: I1004 05:42:42.379239 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" path="/var/lib/kubelet/pods/0eaeda23-f357-4c31-8cd4-3f41d6a33a70/volumes"
Oct 04 05:42:42 crc kubenswrapper[4802]: I1004 05:42:42.380710 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" path="/var/lib/kubelet/pods/dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d/volumes"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.104546 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-q7cpd"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.175992 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc92k\" (UniqueName: \"kubernetes.io/projected/fe45cab7-328c-41b5-8b99-fdc57a6c3727-kube-api-access-gc92k\") pod \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") "
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.176074 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-combined-ca-bundle\") pod \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") "
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.176198 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-config-data\") pod \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") "
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.176293 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-job-config-data\") pod \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\" (UID: \"fe45cab7-328c-41b5-8b99-fdc57a6c3727\") "
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.185866 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe45cab7-328c-41b5-8b99-fdc57a6c3727-kube-api-access-gc92k" (OuterVolumeSpecName: "kube-api-access-gc92k") pod "fe45cab7-328c-41b5-8b99-fdc57a6c3727" (UID: "fe45cab7-328c-41b5-8b99-fdc57a6c3727"). InnerVolumeSpecName "kube-api-access-gc92k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.196437 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "fe45cab7-328c-41b5-8b99-fdc57a6c3727" (UID: "fe45cab7-328c-41b5-8b99-fdc57a6c3727"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.205943 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-config-data" (OuterVolumeSpecName: "config-data") pod "fe45cab7-328c-41b5-8b99-fdc57a6c3727" (UID: "fe45cab7-328c-41b5-8b99-fdc57a6c3727"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.209181 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe45cab7-328c-41b5-8b99-fdc57a6c3727" (UID: "fe45cab7-328c-41b5-8b99-fdc57a6c3727"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.279784 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc92k\" (UniqueName: \"kubernetes.io/projected/fe45cab7-328c-41b5-8b99-fdc57a6c3727-kube-api-access-gc92k\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.280117 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.280128 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-config-data\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.280137 4802 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/fe45cab7-328c-41b5-8b99-fdc57a6c3727-job-config-data\") on node \"crc\" DevicePath \"\""
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.613070 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7cpd" event={"ID":"fe45cab7-328c-41b5-8b99-fdc57a6c3727","Type":"ContainerDied","Data":"8ad1ba028b0704e95b171af1e85ec252824b460b4bb726dafbf940242e14f9e5"}
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.613111 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ad1ba028b0704e95b171af1e85ec252824b460b4bb726dafbf940242e14f9e5"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.613174 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-q7cpd"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.949786 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Oct 04 05:42:43 crc kubenswrapper[4802]: E1004 05:42:43.950159 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe45cab7-328c-41b5-8b99-fdc57a6c3727" containerName="manila-db-sync"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950172 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe45cab7-328c-41b5-8b99-fdc57a6c3727" containerName="manila-db-sync"
Oct 04 05:42:43 crc kubenswrapper[4802]: E1004 05:42:43.950188 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerName="horizon"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950194 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerName="horizon"
Oct 04 05:42:43 crc kubenswrapper[4802]: E1004 05:42:43.950213 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerName="horizon-log"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950218 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerName="horizon-log"
Oct 04 05:42:43 crc kubenswrapper[4802]: E1004 05:42:43.950239 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerName="horizon-log"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950245 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerName="horizon-log"
Oct 04 05:42:43 crc kubenswrapper[4802]: E1004 05:42:43.950257 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerName="horizon"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950262 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerName="horizon"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950464 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerName="horizon"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950486 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerName="horizon"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950501 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe45cab7-328c-41b5-8b99-fdc57a6c3727" containerName="manila-db-sync"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950515 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eaeda23-f357-4c31-8cd4-3f41d6a33a70" containerName="horizon-log"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.950529 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8c2abd-0e64-44b7-b8a2-252fc12d5a1d" containerName="horizon-log"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.951690 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.954799 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.955034 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.955817 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-q5d4m"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.956112 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Oct 04 05:42:43 crc kubenswrapper[4802]: I1004 05:42:43.968495 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.066065 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.067633 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.070846 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.091543 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.094727 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.094829 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-scripts\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.094884 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.094963 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.095091 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5flrz\" (UniqueName: \"kubernetes.io/projected/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-kube-api-access-5flrz\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.095131 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.164140 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-9fvn9"]
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.195523 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198316 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-scripts\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198386 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198455 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198496 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9wdk\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-kube-api-access-b9wdk\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198521 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-ceph\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198575 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198614 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5flrz\" (UniqueName: \"kubernetes.io/projected/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-kube-api-access-5flrz\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198667 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198700 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-scripts\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198744 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198784 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198820 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198841 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.198862 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.204673 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.239837 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.248035 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.248221 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-scripts\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.248976 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.255276 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5flrz\" (UniqueName: \"kubernetes.io/projected/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-kube-api-access-5flrz\") pod \"manila-scheduler-0\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") " pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.258689 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-9fvn9"]
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.273585 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303007 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303049 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303065 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303153 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303178 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303210 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303245 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9wdk\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-kube-api-access-b9wdk\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303266 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-ceph\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303310 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303329 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-config\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303349 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303382 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-scripts\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303416 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpf65\" (UniqueName: \"kubernetes.io/projected/cee4419e-b442-4f88-8502-dbdafe82e436-kube-api-access-cpf65\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303446 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.303665 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.305406 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.308482 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.308821 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.309397 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.310069 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.310163 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.316527 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.320321 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.339323 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-ceph\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.339512 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-scripts\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0"
Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.339615 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9wdk\" (UniqueName:
\"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-kube-api-access-b9wdk\") pod \"manila-share-share1-0\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") " pod="openstack/manila-share-share1-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.364870 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:42:44 crc kubenswrapper[4802]: E1004 05:42:44.365301 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.393471 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.410621 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.410825 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.410924 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpf65\" (UniqueName: 
\"kubernetes.io/projected/cee4419e-b442-4f88-8502-dbdafe82e436-kube-api-access-cpf65\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.410943 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p6fs\" (UniqueName: \"kubernetes.io/projected/8c048749-82ff-401c-9d6b-2011b603fc8b-kube-api-access-4p6fs\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.411024 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c048749-82ff-401c-9d6b-2011b603fc8b-etc-machine-id\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.416202 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-scripts\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.416334 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.416377 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.416452 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.416478 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data-custom\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.416684 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.416719 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-config\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.416764 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c048749-82ff-401c-9d6b-2011b603fc8b-logs\") pod 
\"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.418960 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.419826 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.420005 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.420942 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-config\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.421464 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cee4419e-b442-4f88-8502-dbdafe82e436-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 
crc kubenswrapper[4802]: I1004 05:42:44.439630 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpf65\" (UniqueName: \"kubernetes.io/projected/cee4419e-b442-4f88-8502-dbdafe82e436-kube-api-access-cpf65\") pod \"dnsmasq-dns-76b5fdb995-9fvn9\" (UID: \"cee4419e-b442-4f88-8502-dbdafe82e436\") " pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.502285 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.518186 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.519400 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p6fs\" (UniqueName: \"kubernetes.io/projected/8c048749-82ff-401c-9d6b-2011b603fc8b-kube-api-access-4p6fs\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.519719 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c048749-82ff-401c-9d6b-2011b603fc8b-etc-machine-id\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.519819 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-scripts\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc 
kubenswrapper[4802]: I1004 05:42:44.519934 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data-custom\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.520050 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.520126 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c048749-82ff-401c-9d6b-2011b603fc8b-logs\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.520848 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c048749-82ff-401c-9d6b-2011b603fc8b-logs\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.521766 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c048749-82ff-401c-9d6b-2011b603fc8b-etc-machine-id\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.524191 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-scripts\") pod \"manila-api-0\" (UID: 
\"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.528121 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.528527 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.528668 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data-custom\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.560863 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p6fs\" (UniqueName: \"kubernetes.io/projected/8c048749-82ff-401c-9d6b-2011b603fc8b-kube-api-access-4p6fs\") pod \"manila-api-0\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " pod="openstack/manila-api-0" Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.775292 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 04 05:42:44 crc kubenswrapper[4802]: I1004 05:42:44.814879 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.001414 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 04 05:42:45 crc kubenswrapper[4802]: W1004 05:42:45.013887 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab39419_3804_460a_942c_7236a8d50aae.slice/crio-0846bd70282ed58d3b30cdfede134e927930641bd33b074cfda6a12288e93a49 WatchSource:0}: Error finding container 0846bd70282ed58d3b30cdfede134e927930641bd33b074cfda6a12288e93a49: Status 404 returned error can't find the container with id 0846bd70282ed58d3b30cdfede134e927930641bd33b074cfda6a12288e93a49 Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.025319 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-9fvn9"] Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.418996 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.663767 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0ab39419-3804-460a-942c-7236a8d50aae","Type":"ContainerStarted","Data":"0846bd70282ed58d3b30cdfede134e927930641bd33b074cfda6a12288e93a49"} Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.665583 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"15473ce1-4cda-4edf-83f5-56bba2dbcf0c","Type":"ContainerStarted","Data":"d1fb0ad3cd09e49d50cc9c5cb3bb29570ebe05baf17836ddc6f87e518ee0704a"} Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.672762 4802 generic.go:334] "Generic (PLEG): container finished" podID="cee4419e-b442-4f88-8502-dbdafe82e436" containerID="165c25f767827888e45214413da1f52d1161ed836e91bbf056a2dbc45b5ed120" exitCode=0 Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.672833 
4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" event={"ID":"cee4419e-b442-4f88-8502-dbdafe82e436","Type":"ContainerDied","Data":"165c25f767827888e45214413da1f52d1161ed836e91bbf056a2dbc45b5ed120"} Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.672855 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" event={"ID":"cee4419e-b442-4f88-8502-dbdafe82e436","Type":"ContainerStarted","Data":"c1c6e6fa93d95522700e15642fddc1c8db435eeda0c31f19e89d9e7cb8f715ae"} Oct 04 05:42:45 crc kubenswrapper[4802]: I1004 05:42:45.680786 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c048749-82ff-401c-9d6b-2011b603fc8b","Type":"ContainerStarted","Data":"bf9d26b2a032854257677cd46aa2ddf297fce0e32f6a530771e9ee546bb5d990"} Oct 04 05:42:46 crc kubenswrapper[4802]: I1004 05:42:46.701545 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c048749-82ff-401c-9d6b-2011b603fc8b","Type":"ContainerStarted","Data":"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb"} Oct 04 05:42:46 crc kubenswrapper[4802]: I1004 05:42:46.706254 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"15473ce1-4cda-4edf-83f5-56bba2dbcf0c","Type":"ContainerStarted","Data":"04d4cced873ecb5597441d2d4a28b5123c16ee6ae9ac15eed0841ef6ff7e164d"} Oct 04 05:42:46 crc kubenswrapper[4802]: I1004 05:42:46.706286 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"15473ce1-4cda-4edf-83f5-56bba2dbcf0c","Type":"ContainerStarted","Data":"069bec6400f2cf051c410edc1aaf930847a36235dff7a63a36443af5e686db58"} Oct 04 05:42:46 crc kubenswrapper[4802]: I1004 05:42:46.719227 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" 
event={"ID":"cee4419e-b442-4f88-8502-dbdafe82e436","Type":"ContainerStarted","Data":"7e35c41235969ec437dea5d2b20b2a2e6926de3c1179105a6f37b5652e2481bb"} Oct 04 05:42:46 crc kubenswrapper[4802]: I1004 05:42:46.720032 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:46 crc kubenswrapper[4802]: I1004 05:42:46.736129 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.048166558 podStartE2EDuration="3.736112682s" podCreationTimestamp="2025-10-04 05:42:43 +0000 UTC" firstStartedPulling="2025-10-04 05:42:44.786939112 +0000 UTC m=+3407.194939737" lastFinishedPulling="2025-10-04 05:42:45.474885246 +0000 UTC m=+3407.882885861" observedRunningTime="2025-10-04 05:42:46.73174221 +0000 UTC m=+3409.139742835" watchObservedRunningTime="2025-10-04 05:42:46.736112682 +0000 UTC m=+3409.144113307" Oct 04 05:42:47 crc kubenswrapper[4802]: I1004 05:42:47.127215 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" podStartSLOduration=3.127195944 podStartE2EDuration="3.127195944s" podCreationTimestamp="2025-10-04 05:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:42:46.774438616 +0000 UTC m=+3409.182439241" watchObservedRunningTime="2025-10-04 05:42:47.127195944 +0000 UTC m=+3409.535196569" Oct 04 05:42:47 crc kubenswrapper[4802]: I1004 05:42:47.133062 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Oct 04 05:42:47 crc kubenswrapper[4802]: I1004 05:42:47.732091 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c048749-82ff-401c-9d6b-2011b603fc8b","Type":"ContainerStarted","Data":"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105"} Oct 04 05:42:48 crc 
kubenswrapper[4802]: I1004 05:42:48.398350 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.398335699 podStartE2EDuration="4.398335699s" podCreationTimestamp="2025-10-04 05:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:42:47.767058232 +0000 UTC m=+3410.175058857" watchObservedRunningTime="2025-10-04 05:42:48.398335699 +0000 UTC m=+3410.806336324" Oct 04 05:42:48 crc kubenswrapper[4802]: I1004 05:42:48.739625 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Oct 04 05:42:48 crc kubenswrapper[4802]: I1004 05:42:48.739596 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerName="manila-api-log" containerID="cri-o://5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb" gracePeriod=30 Oct 04 05:42:48 crc kubenswrapper[4802]: I1004 05:42:48.739710 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerName="manila-api" containerID="cri-o://c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105" gracePeriod=30 Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.612993 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.756974 4802 generic.go:334] "Generic (PLEG): container finished" podID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerID="c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105" exitCode=0 Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.757009 4802 generic.go:334] "Generic (PLEG): container finished" podID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerID="5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb" exitCode=143 Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.757032 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c048749-82ff-401c-9d6b-2011b603fc8b","Type":"ContainerDied","Data":"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105"} Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.757041 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.757062 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c048749-82ff-401c-9d6b-2011b603fc8b","Type":"ContainerDied","Data":"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb"} Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.757075 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c048749-82ff-401c-9d6b-2011b603fc8b","Type":"ContainerDied","Data":"bf9d26b2a032854257677cd46aa2ddf297fce0e32f6a530771e9ee546bb5d990"} Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.757090 4802 scope.go:117] "RemoveContainer" containerID="c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.782069 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-scripts\") pod \"8c048749-82ff-401c-9d6b-2011b603fc8b\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.782094 4802 scope.go:117] "RemoveContainer" containerID="5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.782572 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c048749-82ff-401c-9d6b-2011b603fc8b-logs\") pod \"8c048749-82ff-401c-9d6b-2011b603fc8b\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.782715 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data-custom\") pod \"8c048749-82ff-401c-9d6b-2011b603fc8b\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.782778 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data\") pod \"8c048749-82ff-401c-9d6b-2011b603fc8b\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.782834 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c048749-82ff-401c-9d6b-2011b603fc8b-etc-machine-id\") pod \"8c048749-82ff-401c-9d6b-2011b603fc8b\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.782917 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-combined-ca-bundle\") pod \"8c048749-82ff-401c-9d6b-2011b603fc8b\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.782967 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c048749-82ff-401c-9d6b-2011b603fc8b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8c048749-82ff-401c-9d6b-2011b603fc8b" (UID: "8c048749-82ff-401c-9d6b-2011b603fc8b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.783149 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c048749-82ff-401c-9d6b-2011b603fc8b-logs" (OuterVolumeSpecName: "logs") pod "8c048749-82ff-401c-9d6b-2011b603fc8b" (UID: "8c048749-82ff-401c-9d6b-2011b603fc8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.783621 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p6fs\" (UniqueName: \"kubernetes.io/projected/8c048749-82ff-401c-9d6b-2011b603fc8b-kube-api-access-4p6fs\") pod \"8c048749-82ff-401c-9d6b-2011b603fc8b\" (UID: \"8c048749-82ff-401c-9d6b-2011b603fc8b\") " Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.784120 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c048749-82ff-401c-9d6b-2011b603fc8b-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.784143 4802 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c048749-82ff-401c-9d6b-2011b603fc8b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.807133 4802 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8c048749-82ff-401c-9d6b-2011b603fc8b" (UID: "8c048749-82ff-401c-9d6b-2011b603fc8b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.809850 4802 scope.go:117] "RemoveContainer" containerID="c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.811857 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-scripts" (OuterVolumeSpecName: "scripts") pod "8c048749-82ff-401c-9d6b-2011b603fc8b" (UID: "8c048749-82ff-401c-9d6b-2011b603fc8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.811920 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c048749-82ff-401c-9d6b-2011b603fc8b-kube-api-access-4p6fs" (OuterVolumeSpecName: "kube-api-access-4p6fs") pod "8c048749-82ff-401c-9d6b-2011b603fc8b" (UID: "8c048749-82ff-401c-9d6b-2011b603fc8b"). InnerVolumeSpecName "kube-api-access-4p6fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:49 crc kubenswrapper[4802]: E1004 05:42:49.818815 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105\": container with ID starting with c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105 not found: ID does not exist" containerID="c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.818868 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105"} err="failed to get container status \"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105\": rpc error: code = NotFound desc = could not find container \"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105\": container with ID starting with c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105 not found: ID does not exist" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.818899 4802 scope.go:117] "RemoveContainer" containerID="5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb" Oct 04 05:42:49 crc kubenswrapper[4802]: E1004 05:42:49.819660 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb\": container with ID starting with 5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb not found: ID does not exist" containerID="5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.819686 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb"} 
err="failed to get container status \"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb\": rpc error: code = NotFound desc = could not find container \"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb\": container with ID starting with 5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb not found: ID does not exist" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.819704 4802 scope.go:117] "RemoveContainer" containerID="c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.820092 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105"} err="failed to get container status \"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105\": rpc error: code = NotFound desc = could not find container \"c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105\": container with ID starting with c674e3fc46039b5c174d9695a5148c884c51d93bc667a85b3dd84a870168e105 not found: ID does not exist" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.820139 4802 scope.go:117] "RemoveContainer" containerID="5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.820527 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb"} err="failed to get container status \"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb\": rpc error: code = NotFound desc = could not find container \"5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb\": container with ID starting with 5dddd8a198a2047af9095e3466a04ca25cde38a649e00a3eeae7bd544ba37abb not found: ID does not exist" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.821402 4802 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c048749-82ff-401c-9d6b-2011b603fc8b" (UID: "8c048749-82ff-401c-9d6b-2011b603fc8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.845023 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data" (OuterVolumeSpecName: "config-data") pod "8c048749-82ff-401c-9d6b-2011b603fc8b" (UID: "8c048749-82ff-401c-9d6b-2011b603fc8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.886556 4802 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.886613 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.886626 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.886655 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p6fs\" (UniqueName: \"kubernetes.io/projected/8c048749-82ff-401c-9d6b-2011b603fc8b-kube-api-access-4p6fs\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:49 crc kubenswrapper[4802]: I1004 05:42:49.886670 4802 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c048749-82ff-401c-9d6b-2011b603fc8b-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.090670 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.120732 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.164194 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Oct 04 05:42:50 crc kubenswrapper[4802]: E1004 05:42:50.172782 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerName="manila-api" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.172823 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerName="manila-api" Oct 04 05:42:50 crc kubenswrapper[4802]: E1004 05:42:50.172911 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerName="manila-api-log" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.172920 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerName="manila-api-log" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.173214 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerName="manila-api" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.173236 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" containerName="manila-api-log" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.174885 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.178835 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.180180 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.181409 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.182420 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.295895 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv6qq\" (UniqueName: \"kubernetes.io/projected/de69abcf-8596-4354-8150-6469791192cd-kube-api-access-wv6qq\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.295937 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-internal-tls-certs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.295964 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-config-data-custom\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.295996 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de69abcf-8596-4354-8150-6469791192cd-etc-machine-id\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.296015 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de69abcf-8596-4354-8150-6469791192cd-logs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.296092 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-public-tls-certs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.296111 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-config-data\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.296209 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.296300 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-scripts\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.372462 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c048749-82ff-401c-9d6b-2011b603fc8b" path="/var/lib/kubelet/pods/8c048749-82ff-401c-9d6b-2011b603fc8b/volumes" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423003 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-public-tls-certs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423050 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-config-data\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423129 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423174 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-scripts\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423195 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv6qq\" 
(UniqueName: \"kubernetes.io/projected/de69abcf-8596-4354-8150-6469791192cd-kube-api-access-wv6qq\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423215 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-internal-tls-certs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423587 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-config-data-custom\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423701 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de69abcf-8596-4354-8150-6469791192cd-etc-machine-id\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423776 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de69abcf-8596-4354-8150-6469791192cd-etc-machine-id\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.423872 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de69abcf-8596-4354-8150-6469791192cd-logs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc 
kubenswrapper[4802]: I1004 05:42:50.424236 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de69abcf-8596-4354-8150-6469791192cd-logs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.430252 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-internal-tls-certs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.430498 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-config-data\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.432352 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-public-tls-certs\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.445179 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv6qq\" (UniqueName: \"kubernetes.io/projected/de69abcf-8596-4354-8150-6469791192cd-kube-api-access-wv6qq\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.451954 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-combined-ca-bundle\") pod \"manila-api-0\" (UID: 
\"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.452460 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-config-data-custom\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.460573 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de69abcf-8596-4354-8150-6469791192cd-scripts\") pod \"manila-api-0\" (UID: \"de69abcf-8596-4354-8150-6469791192cd\") " pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.467208 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.467545 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="ceilometer-central-agent" containerID="cri-o://af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b" gracePeriod=30 Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.468361 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="proxy-httpd" containerID="cri-o://6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f" gracePeriod=30 Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.468498 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="sg-core" containerID="cri-o://cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac" gracePeriod=30 Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 
05:42:50.468539 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="ceilometer-notification-agent" containerID="cri-o://84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f" gracePeriod=30 Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.501060 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.773116 4802 generic.go:334] "Generic (PLEG): container finished" podID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerID="6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f" exitCode=0 Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.773488 4802 generic.go:334] "Generic (PLEG): container finished" podID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerID="cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac" exitCode=2 Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.773171 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerDied","Data":"6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f"} Oct 04 05:42:50 crc kubenswrapper[4802]: I1004 05:42:50.773605 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerDied","Data":"cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac"} Oct 04 05:42:51 crc kubenswrapper[4802]: I1004 05:42:51.789442 4802 generic.go:334] "Generic (PLEG): container finished" podID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerID="af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b" exitCode=0 Oct 04 05:42:51 crc kubenswrapper[4802]: I1004 05:42:51.789602 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerDied","Data":"af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b"} Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.705201 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.842237 4802 generic.go:334] "Generic (PLEG): container finished" podID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerID="84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f" exitCode=0 Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.842545 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerDied","Data":"84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f"} Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.842572 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edd0a556-bfd5-46dc-aa86-63cfc060baf6","Type":"ContainerDied","Data":"4bca9117e9cda43d8f9f60e9393cd67b99ecf8680a1914b56ca0ef1ef4701a35"} Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.842588 4802 scope.go:117] "RemoveContainer" containerID="6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.842763 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.892786 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-log-httpd\") pod \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.892870 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f2ht\" (UniqueName: \"kubernetes.io/projected/edd0a556-bfd5-46dc-aa86-63cfc060baf6-kube-api-access-5f2ht\") pod \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.893033 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-ceilometer-tls-certs\") pod \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.893094 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-scripts\") pod \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.893143 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-config-data\") pod \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.893162 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-sg-core-conf-yaml\") pod \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.893184 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-combined-ca-bundle\") pod \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.893268 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-run-httpd\") pod \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\" (UID: \"edd0a556-bfd5-46dc-aa86-63cfc060baf6\") " Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.894171 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "edd0a556-bfd5-46dc-aa86-63cfc060baf6" (UID: "edd0a556-bfd5-46dc-aa86-63cfc060baf6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.896207 4802 scope.go:117] "RemoveContainer" containerID="cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.897153 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "edd0a556-bfd5-46dc-aa86-63cfc060baf6" (UID: "edd0a556-bfd5-46dc-aa86-63cfc060baf6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.920861 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-scripts" (OuterVolumeSpecName: "scripts") pod "edd0a556-bfd5-46dc-aa86-63cfc060baf6" (UID: "edd0a556-bfd5-46dc-aa86-63cfc060baf6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.929786 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd0a556-bfd5-46dc-aa86-63cfc060baf6-kube-api-access-5f2ht" (OuterVolumeSpecName: "kube-api-access-5f2ht") pod "edd0a556-bfd5-46dc-aa86-63cfc060baf6" (UID: "edd0a556-bfd5-46dc-aa86-63cfc060baf6"). InnerVolumeSpecName "kube-api-access-5f2ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.933904 4802 scope.go:117] "RemoveContainer" containerID="84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.956408 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "edd0a556-bfd5-46dc-aa86-63cfc060baf6" (UID: "edd0a556-bfd5-46dc-aa86-63cfc060baf6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.960249 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.987095 4802 scope.go:117] "RemoveContainer" containerID="af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b" Oct 04 05:42:53 crc kubenswrapper[4802]: W1004 05:42:53.994685 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde69abcf_8596_4354_8150_6469791192cd.slice/crio-b89a983deeb261debe010505430804d584570c06584531b2879b6af69f72d688 WatchSource:0}: Error finding container b89a983deeb261debe010505430804d584570c06584531b2879b6af69f72d688: Status 404 returned error can't find the container with id b89a983deeb261debe010505430804d584570c06584531b2879b6af69f72d688 Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.995461 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.995720 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.995750 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edd0a556-bfd5-46dc-aa86-63cfc060baf6-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.995762 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f2ht\" (UniqueName: \"kubernetes.io/projected/edd0a556-bfd5-46dc-aa86-63cfc060baf6-kube-api-access-5f2ht\") on node \"crc\" DevicePath \"\"" Oct 04 
05:42:53 crc kubenswrapper[4802]: I1004 05:42:53.995777 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.007939 4802 scope.go:117] "RemoveContainer" containerID="6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f" Oct 04 05:42:54 crc kubenswrapper[4802]: E1004 05:42:54.009681 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f\": container with ID starting with 6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f not found: ID does not exist" containerID="6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.009719 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f"} err="failed to get container status \"6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f\": rpc error: code = NotFound desc = could not find container \"6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f\": container with ID starting with 6aa8a94b874f87a8ea185034fbc2734075835a6ea8619d15aadd543926e2b51f not found: ID does not exist" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.009843 4802 scope.go:117] "RemoveContainer" containerID="cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac" Oct 04 05:42:54 crc kubenswrapper[4802]: E1004 05:42:54.011823 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac\": container with ID starting with 
cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac not found: ID does not exist" containerID="cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.011939 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac"} err="failed to get container status \"cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac\": rpc error: code = NotFound desc = could not find container \"cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac\": container with ID starting with cd0466efba574b7a9f193bcd50e411be3b3aeb77525ab908b60a2ae2db0287ac not found: ID does not exist" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.012067 4802 scope.go:117] "RemoveContainer" containerID="84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f" Oct 04 05:42:54 crc kubenswrapper[4802]: E1004 05:42:54.012499 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f\": container with ID starting with 84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f not found: ID does not exist" containerID="84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.012537 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f"} err="failed to get container status \"84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f\": rpc error: code = NotFound desc = could not find container \"84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f\": container with ID starting with 84082deb857cde95143808e65f3fcda348947c0c62247da740c9204206352a2f not found: ID does not 
exist" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.012563 4802 scope.go:117] "RemoveContainer" containerID="af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b" Oct 04 05:42:54 crc kubenswrapper[4802]: E1004 05:42:54.012797 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b\": container with ID starting with af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b not found: ID does not exist" containerID="af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.012833 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b"} err="failed to get container status \"af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b\": rpc error: code = NotFound desc = could not find container \"af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b\": container with ID starting with af3b855928a572db5be9839bf956bf00b1cb6fc8668a0b8b0b6234eeb1f2a46b not found: ID does not exist" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.037075 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edd0a556-bfd5-46dc-aa86-63cfc060baf6" (UID: "edd0a556-bfd5-46dc-aa86-63cfc060baf6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.059385 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "edd0a556-bfd5-46dc-aa86-63cfc060baf6" (UID: "edd0a556-bfd5-46dc-aa86-63cfc060baf6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.065331 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-config-data" (OuterVolumeSpecName: "config-data") pod "edd0a556-bfd5-46dc-aa86-63cfc060baf6" (UID: "edd0a556-bfd5-46dc-aa86-63cfc060baf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.098157 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.099615 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.099907 4802 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd0a556-bfd5-46dc-aa86-63cfc060baf6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.178242 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.231631 4802 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.254986 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:42:54 crc kubenswrapper[4802]: E1004 05:42:54.255438 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="sg-core" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.255459 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="sg-core" Oct 04 05:42:54 crc kubenswrapper[4802]: E1004 05:42:54.255476 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="ceilometer-central-agent" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.255482 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="ceilometer-central-agent" Oct 04 05:42:54 crc kubenswrapper[4802]: E1004 05:42:54.255495 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="ceilometer-notification-agent" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.255501 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="ceilometer-notification-agent" Oct 04 05:42:54 crc kubenswrapper[4802]: E1004 05:42:54.255531 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="proxy-httpd" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.255537 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="proxy-httpd" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.255725 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" 
containerName="ceilometer-central-agent" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.255739 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="proxy-httpd" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.255752 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="ceilometer-notification-agent" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.257698 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" containerName="sg-core" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.260900 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.272254 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.274504 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.274765 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.274783 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.275850 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.381998 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd0a556-bfd5-46dc-aa86-63cfc060baf6" path="/var/lib/kubelet/pods/edd0a556-bfd5-46dc-aa86-63cfc060baf6/volumes" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.415268 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-run-httpd\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.416029 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-log-httpd\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.416108 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.416164 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.416197 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jpn\" (UniqueName: \"kubernetes.io/projected/51971646-db5a-4718-b63b-54664766212f-kube-api-access-88jpn\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.416390 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-config-data\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.416824 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-scripts\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.416890 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.503913 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-9fvn9" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.518561 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-scripts\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.518659 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.518723 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-run-httpd\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.518772 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-log-httpd\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.519159 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-run-httpd\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.519245 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.519299 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.519310 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-log-httpd\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.519335 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jpn\" (UniqueName: \"kubernetes.io/projected/51971646-db5a-4718-b63b-54664766212f-kube-api-access-88jpn\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.519424 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-config-data\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.522037 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-scripts\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.522414 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.522729 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.523574 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.537055 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-config-data\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.544897 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88jpn\" (UniqueName: \"kubernetes.io/projected/51971646-db5a-4718-b63b-54664766212f-kube-api-access-88jpn\") pod \"ceilometer-0\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.597357 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-z5ff7"] Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.609003 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" podUID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" containerName="dnsmasq-dns" containerID="cri-o://b0b6ca717b84602d11fcbbe95b20f4d326049c6ad49450d83963d5e2c55dfb8c" gracePeriod=10 Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.625266 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.881772 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0ab39419-3804-460a-942c-7236a8d50aae","Type":"ContainerStarted","Data":"f0917396e2f45ff2f1b0d2315aa7d237af8bdc1db8e850b6992ddab81eee2b9a"} Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.885015 4802 generic.go:334] "Generic (PLEG): container finished" podID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" containerID="b0b6ca717b84602d11fcbbe95b20f4d326049c6ad49450d83963d5e2c55dfb8c" exitCode=0 Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.885120 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" event={"ID":"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1","Type":"ContainerDied","Data":"b0b6ca717b84602d11fcbbe95b20f4d326049c6ad49450d83963d5e2c55dfb8c"} Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.887760 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"de69abcf-8596-4354-8150-6469791192cd","Type":"ContainerStarted","Data":"750ff3faf0c152162e3ab865387ad94d75acbc5f3b973ba5f7198903dad9e456"} Oct 04 05:42:54 crc kubenswrapper[4802]: I1004 05:42:54.887785 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"de69abcf-8596-4354-8150-6469791192cd","Type":"ContainerStarted","Data":"b89a983deeb261debe010505430804d584570c06584531b2879b6af69f72d688"} Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.138380 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.825138857 podStartE2EDuration="11.138360155s" podCreationTimestamp="2025-10-04 05:42:44 +0000 UTC" firstStartedPulling="2025-10-04 05:42:45.017383345 +0000 UTC m=+3407.425383970" lastFinishedPulling="2025-10-04 05:42:53.330604643 +0000 UTC m=+3415.738605268" 
observedRunningTime="2025-10-04 05:42:54.911039069 +0000 UTC m=+3417.319039694" watchObservedRunningTime="2025-10-04 05:42:55.138360155 +0000 UTC m=+3417.546360780" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.149089 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.160196 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.340984 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-dns-svc\") pod \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.341314 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-sb\") pod \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.341377 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-openstack-edpm-ipam\") pod \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.341423 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-nb\") pod \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.341510 4802 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4x66\" (UniqueName: \"kubernetes.io/projected/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-kube-api-access-j4x66\") pod \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.341535 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-config\") pod \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\" (UID: \"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1\") " Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.353876 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-kube-api-access-j4x66" (OuterVolumeSpecName: "kube-api-access-j4x66") pod "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" (UID: "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1"). InnerVolumeSpecName "kube-api-access-j4x66". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.442656 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" (UID: "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.443751 4802 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.443895 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4x66\" (UniqueName: \"kubernetes.io/projected/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-kube-api-access-j4x66\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.466039 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-config" (OuterVolumeSpecName: "config") pod "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" (UID: "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.494851 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" (UID: "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.495255 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" (UID: "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.546009 4802 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-config\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.546809 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.546960 4802 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.561298 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" (UID: "f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.648428 4802 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.900042 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.900256 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-z5ff7" event={"ID":"f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1","Type":"ContainerDied","Data":"acd7b6637e718d466f8aef662630999ca52c9893cd997275992e803bbc8145b5"} Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.900303 4802 scope.go:117] "RemoveContainer" containerID="b0b6ca717b84602d11fcbbe95b20f4d326049c6ad49450d83963d5e2c55dfb8c" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.903879 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"de69abcf-8596-4354-8150-6469791192cd","Type":"ContainerStarted","Data":"fc0c3f383cd6c49e9b2f2e9b63dfb2a82f1e945edf598bd89fafe5f1ea384536"} Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.904714 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.910202 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerStarted","Data":"fad8af054741d32bde0cf3b72740cb8a559e4bcebe42daf6861287b969b1825b"} Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.912979 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0ab39419-3804-460a-942c-7236a8d50aae","Type":"ContainerStarted","Data":"8db3b4ac27fd6f7f4579e921c4fcec2a9b6bdc5f95985f573fad1a8ec4bb7d89"} Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.933710 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.933688806 podStartE2EDuration="5.933688806s" podCreationTimestamp="2025-10-04 05:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:42:55.931197506 +0000 UTC m=+3418.339198141" watchObservedRunningTime="2025-10-04 05:42:55.933688806 +0000 UTC m=+3418.341689431" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.963711 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-z5ff7"] Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.968838 4802 scope.go:117] "RemoveContainer" containerID="c70256d2de4599839b9bc55703e41b6def2adcd267d6efbe08e9d0281e87cf8b" Oct 04 05:42:55 crc kubenswrapper[4802]: I1004 05:42:55.971918 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-z5ff7"] Oct 04 05:42:56 crc kubenswrapper[4802]: I1004 05:42:56.371132 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" path="/var/lib/kubelet/pods/f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1/volumes" Oct 04 05:42:56 crc kubenswrapper[4802]: I1004 05:42:56.923110 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerStarted","Data":"e439229ded03471df54aac620e859506985d4be73cb7774957b7e2fd0b2efd61"} Oct 04 05:42:56 crc kubenswrapper[4802]: I1004 05:42:56.923433 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerStarted","Data":"0b1f06cba554be16de757f875b03d1d59c75f0eecfa654a327d224c51fd65f8f"} Oct 04 05:42:57 crc kubenswrapper[4802]: I1004 05:42:57.672329 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:42:59 crc kubenswrapper[4802]: I1004 05:42:59.359890 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:42:59 crc kubenswrapper[4802]: I1004 05:42:59.953613 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerStarted","Data":"df29cd39d8d15edbf29876da05dad671afd338a96459d0ac25c56321b0bf97e3"} Oct 04 05:42:59 crc kubenswrapper[4802]: I1004 05:42:59.955713 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"2460e6a2a76c8e7cf39651cc3e284f1c1f8c2f793fcb926c2145f17ff566d459"} Oct 04 05:43:01 crc kubenswrapper[4802]: I1004 05:43:01.977590 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerStarted","Data":"a474d09c1bc68fc7b232cb2d9b480f799dc5ea4aa8384c5eebe3773552bec504"} Oct 04 05:43:01 crc kubenswrapper[4802]: I1004 05:43:01.978225 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:43:01 crc kubenswrapper[4802]: I1004 05:43:01.977814 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="sg-core" containerID="cri-o://df29cd39d8d15edbf29876da05dad671afd338a96459d0ac25c56321b0bf97e3" gracePeriod=30 Oct 04 05:43:01 crc kubenswrapper[4802]: I1004 05:43:01.977755 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="ceilometer-central-agent" containerID="cri-o://0b1f06cba554be16de757f875b03d1d59c75f0eecfa654a327d224c51fd65f8f" gracePeriod=30 Oct 04 05:43:01 crc kubenswrapper[4802]: I1004 05:43:01.977822 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="ceilometer-notification-agent" 
containerID="cri-o://e439229ded03471df54aac620e859506985d4be73cb7774957b7e2fd0b2efd61" gracePeriod=30 Oct 04 05:43:01 crc kubenswrapper[4802]: I1004 05:43:01.977847 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="proxy-httpd" containerID="cri-o://a474d09c1bc68fc7b232cb2d9b480f799dc5ea4aa8384c5eebe3773552bec504" gracePeriod=30 Oct 04 05:43:02 crc kubenswrapper[4802]: I1004 05:43:02.008930 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.382037961 podStartE2EDuration="8.008912586s" podCreationTimestamp="2025-10-04 05:42:54 +0000 UTC" firstStartedPulling="2025-10-04 05:42:55.168348024 +0000 UTC m=+3417.576348649" lastFinishedPulling="2025-10-04 05:43:00.795222609 +0000 UTC m=+3423.203223274" observedRunningTime="2025-10-04 05:43:02.002787024 +0000 UTC m=+3424.410787669" watchObservedRunningTime="2025-10-04 05:43:02.008912586 +0000 UTC m=+3424.416913211" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.048263 4802 generic.go:334] "Generic (PLEG): container finished" podID="51971646-db5a-4718-b63b-54664766212f" containerID="a474d09c1bc68fc7b232cb2d9b480f799dc5ea4aa8384c5eebe3773552bec504" exitCode=0 Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.048574 4802 generic.go:334] "Generic (PLEG): container finished" podID="51971646-db5a-4718-b63b-54664766212f" containerID="df29cd39d8d15edbf29876da05dad671afd338a96459d0ac25c56321b0bf97e3" exitCode=2 Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.048582 4802 generic.go:334] "Generic (PLEG): container finished" podID="51971646-db5a-4718-b63b-54664766212f" containerID="e439229ded03471df54aac620e859506985d4be73cb7774957b7e2fd0b2efd61" exitCode=0 Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.048589 4802 generic.go:334] "Generic (PLEG): container finished" podID="51971646-db5a-4718-b63b-54664766212f" 
containerID="0b1f06cba554be16de757f875b03d1d59c75f0eecfa654a327d224c51fd65f8f" exitCode=0 Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.048330 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerDied","Data":"a474d09c1bc68fc7b232cb2d9b480f799dc5ea4aa8384c5eebe3773552bec504"} Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.048672 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerDied","Data":"df29cd39d8d15edbf29876da05dad671afd338a96459d0ac25c56321b0bf97e3"} Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.048688 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerDied","Data":"e439229ded03471df54aac620e859506985d4be73cb7774957b7e2fd0b2efd61"} Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.048703 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerDied","Data":"0b1f06cba554be16de757f875b03d1d59c75f0eecfa654a327d224c51fd65f8f"} Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.052573 4802 generic.go:334] "Generic (PLEG): container finished" podID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerID="7f111214cc3791fe390db8367b7c6a624f5e41ae2460776e8f52f0ac67c748d2" exitCode=137 Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.052606 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdf6cb5fb-shljs" event={"ID":"7638b318-b144-4dea-9a8c-6a694fce84a2","Type":"ContainerDied","Data":"7f111214cc3791fe390db8367b7c6a624f5e41ae2460776e8f52f0ac67c748d2"} Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.390547 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.396063 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433298 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-log-httpd\") pod \"51971646-db5a-4718-b63b-54664766212f\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433332 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-secret-key\") pod \"7638b318-b144-4dea-9a8c-6a694fce84a2\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433358 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88jpn\" (UniqueName: \"kubernetes.io/projected/51971646-db5a-4718-b63b-54664766212f-kube-api-access-88jpn\") pod \"51971646-db5a-4718-b63b-54664766212f\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433376 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-sg-core-conf-yaml\") pod \"51971646-db5a-4718-b63b-54664766212f\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433455 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-config-data\") pod \"7638b318-b144-4dea-9a8c-6a694fce84a2\" (UID: 
\"7638b318-b144-4dea-9a8c-6a694fce84a2\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433482 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-ceilometer-tls-certs\") pod \"51971646-db5a-4718-b63b-54664766212f\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433505 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-scripts\") pod \"7638b318-b144-4dea-9a8c-6a694fce84a2\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433523 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-tls-certs\") pod \"7638b318-b144-4dea-9a8c-6a694fce84a2\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.433548 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7638b318-b144-4dea-9a8c-6a694fce84a2-logs\") pod \"7638b318-b144-4dea-9a8c-6a694fce84a2\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.434038 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7638b318-b144-4dea-9a8c-6a694fce84a2-logs" (OuterVolumeSpecName: "logs") pod "7638b318-b144-4dea-9a8c-6a694fce84a2" (UID: "7638b318-b144-4dea-9a8c-6a694fce84a2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.434311 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51971646-db5a-4718-b63b-54664766212f" (UID: "51971646-db5a-4718-b63b-54664766212f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.434394 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-config-data\") pod \"51971646-db5a-4718-b63b-54664766212f\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.434479 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-scripts\") pod \"51971646-db5a-4718-b63b-54664766212f\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.434628 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-run-httpd\") pod \"51971646-db5a-4718-b63b-54664766212f\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.434738 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-combined-ca-bundle\") pod \"7638b318-b144-4dea-9a8c-6a694fce84a2\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.434858 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-spq46\" (UniqueName: \"kubernetes.io/projected/7638b318-b144-4dea-9a8c-6a694fce84a2-kube-api-access-spq46\") pod \"7638b318-b144-4dea-9a8c-6a694fce84a2\" (UID: \"7638b318-b144-4dea-9a8c-6a694fce84a2\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.434968 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-combined-ca-bundle\") pod \"51971646-db5a-4718-b63b-54664766212f\" (UID: \"51971646-db5a-4718-b63b-54664766212f\") " Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.435409 4802 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.435486 4802 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7638b318-b144-4dea-9a8c-6a694fce84a2-logs\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.442505 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "51971646-db5a-4718-b63b-54664766212f" (UID: "51971646-db5a-4718-b63b-54664766212f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.442719 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7638b318-b144-4dea-9a8c-6a694fce84a2" (UID: "7638b318-b144-4dea-9a8c-6a694fce84a2"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.468589 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51971646-db5a-4718-b63b-54664766212f-kube-api-access-88jpn" (OuterVolumeSpecName: "kube-api-access-88jpn") pod "51971646-db5a-4718-b63b-54664766212f" (UID: "51971646-db5a-4718-b63b-54664766212f"). InnerVolumeSpecName "kube-api-access-88jpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.468679 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7638b318-b144-4dea-9a8c-6a694fce84a2-kube-api-access-spq46" (OuterVolumeSpecName: "kube-api-access-spq46") pod "7638b318-b144-4dea-9a8c-6a694fce84a2" (UID: "7638b318-b144-4dea-9a8c-6a694fce84a2"). InnerVolumeSpecName "kube-api-access-spq46". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.469521 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-scripts" (OuterVolumeSpecName: "scripts") pod "51971646-db5a-4718-b63b-54664766212f" (UID: "51971646-db5a-4718-b63b-54664766212f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.504707 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-config-data" (OuterVolumeSpecName: "config-data") pod "7638b318-b144-4dea-9a8c-6a694fce84a2" (UID: "7638b318-b144-4dea-9a8c-6a694fce84a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.508114 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7638b318-b144-4dea-9a8c-6a694fce84a2" (UID: "7638b318-b144-4dea-9a8c-6a694fce84a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.516000 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-scripts" (OuterVolumeSpecName: "scripts") pod "7638b318-b144-4dea-9a8c-6a694fce84a2" (UID: "7638b318-b144-4dea-9a8c-6a694fce84a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.517351 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51971646-db5a-4718-b63b-54664766212f" (UID: "51971646-db5a-4718-b63b-54664766212f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.531969 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "51971646-db5a-4718-b63b-54664766212f" (UID: "51971646-db5a-4718-b63b-54664766212f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537542 4802 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51971646-db5a-4718-b63b-54664766212f-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537567 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537582 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spq46\" (UniqueName: \"kubernetes.io/projected/7638b318-b144-4dea-9a8c-6a694fce84a2-kube-api-access-spq46\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537595 4802 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537607 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88jpn\" (UniqueName: \"kubernetes.io/projected/51971646-db5a-4718-b63b-54664766212f-kube-api-access-88jpn\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537621 4802 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537632 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: 
I1004 05:43:03.537695 4802 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537709 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7638b318-b144-4dea-9a8c-6a694fce84a2-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.537720 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.548403 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "7638b318-b144-4dea-9a8c-6a694fce84a2" (UID: "7638b318-b144-4dea-9a8c-6a694fce84a2"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.568486 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51971646-db5a-4718-b63b-54664766212f" (UID: "51971646-db5a-4718-b63b-54664766212f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.599711 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-config-data" (OuterVolumeSpecName: "config-data") pod "51971646-db5a-4718-b63b-54664766212f" (UID: "51971646-db5a-4718-b63b-54664766212f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.639383 4802 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7638b318-b144-4dea-9a8c-6a694fce84a2-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.639411 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:03 crc kubenswrapper[4802]: I1004 05:43:03.639420 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51971646-db5a-4718-b63b-54664766212f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.062628 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51971646-db5a-4718-b63b-54664766212f","Type":"ContainerDied","Data":"fad8af054741d32bde0cf3b72740cb8a559e4bcebe42daf6861287b969b1825b"} Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.062969 4802 scope.go:117] "RemoveContainer" containerID="a474d09c1bc68fc7b232cb2d9b480f799dc5ea4aa8384c5eebe3773552bec504" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.062692 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.065745 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fdf6cb5fb-shljs" event={"ID":"7638b318-b144-4dea-9a8c-6a694fce84a2","Type":"ContainerDied","Data":"d8b48e98bd544c77adaa7d112a4e9d47102f213d7cc2fd8ea7554bda92504420"} Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.065788 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fdf6cb5fb-shljs" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.095060 4802 scope.go:117] "RemoveContainer" containerID="df29cd39d8d15edbf29876da05dad671afd338a96459d0ac25c56321b0bf97e3" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.131156 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.138683 4802 scope.go:117] "RemoveContainer" containerID="e439229ded03471df54aac620e859506985d4be73cb7774957b7e2fd0b2efd61" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.156626 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.164794 4802 scope.go:117] "RemoveContainer" containerID="0b1f06cba554be16de757f875b03d1d59c75f0eecfa654a327d224c51fd65f8f" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.167870 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fdf6cb5fb-shljs"] Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.182938 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fdf6cb5fb-shljs"] Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.190603 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:43:04 crc kubenswrapper[4802]: E1004 05:43:04.191106 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" containerName="dnsmasq-dns" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.191225 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" containerName="dnsmasq-dns" Oct 04 05:43:04 crc kubenswrapper[4802]: E1004 05:43:04.191319 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="sg-core" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 
05:43:04.191378 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="sg-core" Oct 04 05:43:04 crc kubenswrapper[4802]: E1004 05:43:04.191476 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="proxy-httpd" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.191556 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="proxy-httpd" Oct 04 05:43:04 crc kubenswrapper[4802]: E1004 05:43:04.191627 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="ceilometer-notification-agent" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.191771 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="ceilometer-notification-agent" Oct 04 05:43:04 crc kubenswrapper[4802]: E1004 05:43:04.191877 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.191969 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon" Oct 04 05:43:04 crc kubenswrapper[4802]: E1004 05:43:04.192112 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="ceilometer-central-agent" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.192246 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="ceilometer-central-agent" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.192320 4802 scope.go:117] "RemoveContainer" containerID="c8ce3eb4eb6868d5edeb2b57be9e9781f9583274c26a5d874c01a4b346236639" Oct 04 05:43:04 crc kubenswrapper[4802]: E1004 05:43:04.192335 4802 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" containerName="init" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.192500 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" containerName="init" Oct 04 05:43:04 crc kubenswrapper[4802]: E1004 05:43:04.192572 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon-log" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.192857 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon-log" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.193216 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.193367 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="sg-core" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.193463 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="ceilometer-notification-agent" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.193609 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="proxy-httpd" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.194293 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="51971646-db5a-4718-b63b-54664766212f" containerName="ceilometer-central-agent" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.194384 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" containerName="horizon-log" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.194495 4802 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f21a5f3f-81a3-4ffa-a729-fc4a65aaeaf1" containerName="dnsmasq-dns" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.196779 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.199685 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.199821 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.199696 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.202539 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.259171 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-scripts\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.259491 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.259590 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81d6574-95c4-4583-893a-87f8a22d6162-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.259707 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-config-data\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.259883 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwgb\" (UniqueName: \"kubernetes.io/projected/f81d6574-95c4-4583-893a-87f8a22d6162-kube-api-access-9pwgb\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.259920 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81d6574-95c4-4583-893a-87f8a22d6162-run-httpd\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.259940 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0" Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.259964 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0" Oct 04 05:43:04 crc 
kubenswrapper[4802]: I1004 05:43:04.361236 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-scripts\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.361307 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.361335 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81d6574-95c4-4583-893a-87f8a22d6162-log-httpd\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.361367 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-config-data\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.361472 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwgb\" (UniqueName: \"kubernetes.io/projected/f81d6574-95c4-4583-893a-87f8a22d6162-kube-api-access-9pwgb\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.361506 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81d6574-95c4-4583-893a-87f8a22d6162-run-httpd\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.361535 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.361566 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.361889 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81d6574-95c4-4583-893a-87f8a22d6162-log-httpd\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.362527 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81d6574-95c4-4583-893a-87f8a22d6162-run-httpd\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.366993 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-scripts\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.368109 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.368631 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.368896 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-config-data\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.369978 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81d6574-95c4-4583-893a-87f8a22d6162-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.378527 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51971646-db5a-4718-b63b-54664766212f" path="/var/lib/kubelet/pods/51971646-db5a-4718-b63b-54664766212f/volumes"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.379586 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7638b318-b144-4dea-9a8c-6a694fce84a2" path="/var/lib/kubelet/pods/7638b318-b144-4dea-9a8c-6a694fce84a2/volumes"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.381345 4802 scope.go:117] "RemoveContainer" containerID="7f111214cc3791fe390db8367b7c6a624f5e41ae2460776e8f52f0ac67c748d2"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.382367 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwgb\" (UniqueName: \"kubernetes.io/projected/f81d6574-95c4-4583-893a-87f8a22d6162-kube-api-access-9pwgb\") pod \"ceilometer-0\" (UID: \"f81d6574-95c4-4583-893a-87f8a22d6162\") " pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.395182 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.517851 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 04 05:43:04 crc kubenswrapper[4802]: W1004 05:43:04.992347 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf81d6574_95c4_4583_893a_87f8a22d6162.slice/crio-c46e9773f749c91b07102605acf64bdcd05f15ecbc9d0cee107ff9a25c65cf06 WatchSource:0}: Error finding container c46e9773f749c91b07102605acf64bdcd05f15ecbc9d0cee107ff9a25c65cf06: Status 404 returned error can't find the container with id c46e9773f749c91b07102605acf64bdcd05f15ecbc9d0cee107ff9a25c65cf06
Oct 04 05:43:04 crc kubenswrapper[4802]: I1004 05:43:04.999268 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 04 05:43:05 crc kubenswrapper[4802]: I1004 05:43:05.075820 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81d6574-95c4-4583-893a-87f8a22d6162","Type":"ContainerStarted","Data":"c46e9773f749c91b07102605acf64bdcd05f15ecbc9d0cee107ff9a25c65cf06"}
Oct 04 05:43:05 crc kubenswrapper[4802]: I1004 05:43:05.876937 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Oct 04 05:43:05 crc kubenswrapper[4802]: I1004 05:43:05.998385 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Oct 04 05:43:06 crc kubenswrapper[4802]: I1004 05:43:06.005073 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Oct 04 05:43:06 crc kubenswrapper[4802]: I1004 05:43:06.059620 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 04 05:43:06 crc kubenswrapper[4802]: I1004 05:43:06.104220 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="0ab39419-3804-460a-942c-7236a8d50aae" containerName="manila-share" containerID="cri-o://f0917396e2f45ff2f1b0d2315aa7d237af8bdc1db8e850b6992ddab81eee2b9a" gracePeriod=30
Oct 04 05:43:06 crc kubenswrapper[4802]: I1004 05:43:06.104339 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="0ab39419-3804-460a-942c-7236a8d50aae" containerName="probe" containerID="cri-o://8db3b4ac27fd6f7f4579e921c4fcec2a9b6bdc5f95985f573fad1a8ec4bb7d89" gracePeriod=30
Oct 04 05:43:06 crc kubenswrapper[4802]: I1004 05:43:06.104353 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerName="manila-scheduler" containerID="cri-o://069bec6400f2cf051c410edc1aaf930847a36235dff7a63a36443af5e686db58" gracePeriod=30
Oct 04 05:43:06 crc kubenswrapper[4802]: I1004 05:43:06.104509 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerName="probe" containerID="cri-o://04d4cced873ecb5597441d2d4a28b5123c16ee6ae9ac15eed0841ef6ff7e164d" gracePeriod=30
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.114893 4802 generic.go:334] "Generic (PLEG): container finished" podID="0ab39419-3804-460a-942c-7236a8d50aae" containerID="8db3b4ac27fd6f7f4579e921c4fcec2a9b6bdc5f95985f573fad1a8ec4bb7d89" exitCode=0
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.116391 4802 generic.go:334] "Generic (PLEG): container finished" podID="0ab39419-3804-460a-942c-7236a8d50aae" containerID="f0917396e2f45ff2f1b0d2315aa7d237af8bdc1db8e850b6992ddab81eee2b9a" exitCode=1
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.114967 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0ab39419-3804-460a-942c-7236a8d50aae","Type":"ContainerDied","Data":"8db3b4ac27fd6f7f4579e921c4fcec2a9b6bdc5f95985f573fad1a8ec4bb7d89"}
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.116669 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0ab39419-3804-460a-942c-7236a8d50aae","Type":"ContainerDied","Data":"f0917396e2f45ff2f1b0d2315aa7d237af8bdc1db8e850b6992ddab81eee2b9a"}
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.118983 4802 generic.go:334] "Generic (PLEG): container finished" podID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerID="04d4cced873ecb5597441d2d4a28b5123c16ee6ae9ac15eed0841ef6ff7e164d" exitCode=0
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.119007 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"15473ce1-4cda-4edf-83f5-56bba2dbcf0c","Type":"ContainerDied","Data":"04d4cced873ecb5597441d2d4a28b5123c16ee6ae9ac15eed0841ef6ff7e164d"}
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.720600 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.732024 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-etc-machine-id\") pod \"0ab39419-3804-460a-942c-7236a8d50aae\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") "
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.732086 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-combined-ca-bundle\") pod \"0ab39419-3804-460a-942c-7236a8d50aae\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") "
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.732197 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data-custom\") pod \"0ab39419-3804-460a-942c-7236a8d50aae\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") "
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.732227 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-ceph\") pod \"0ab39419-3804-460a-942c-7236a8d50aae\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") "
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.732278 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data\") pod \"0ab39419-3804-460a-942c-7236a8d50aae\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") "
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.732335 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-var-lib-manila\") pod \"0ab39419-3804-460a-942c-7236a8d50aae\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") "
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.732359 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-scripts\") pod \"0ab39419-3804-460a-942c-7236a8d50aae\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") "
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.732404 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9wdk\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-kube-api-access-b9wdk\") pod \"0ab39419-3804-460a-942c-7236a8d50aae\" (UID: \"0ab39419-3804-460a-942c-7236a8d50aae\") "
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.734198 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0ab39419-3804-460a-942c-7236a8d50aae" (UID: "0ab39419-3804-460a-942c-7236a8d50aae"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.734415 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "0ab39419-3804-460a-942c-7236a8d50aae" (UID: "0ab39419-3804-460a-942c-7236a8d50aae"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.738545 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0ab39419-3804-460a-942c-7236a8d50aae" (UID: "0ab39419-3804-460a-942c-7236a8d50aae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.738731 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-ceph" (OuterVolumeSpecName: "ceph") pod "0ab39419-3804-460a-942c-7236a8d50aae" (UID: "0ab39419-3804-460a-942c-7236a8d50aae"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.743915 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-scripts" (OuterVolumeSpecName: "scripts") pod "0ab39419-3804-460a-942c-7236a8d50aae" (UID: "0ab39419-3804-460a-942c-7236a8d50aae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.744925 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-kube-api-access-b9wdk" (OuterVolumeSpecName: "kube-api-access-b9wdk") pod "0ab39419-3804-460a-942c-7236a8d50aae" (UID: "0ab39419-3804-460a-942c-7236a8d50aae"). InnerVolumeSpecName "kube-api-access-b9wdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.819866 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ab39419-3804-460a-942c-7236a8d50aae" (UID: "0ab39419-3804-460a-942c-7236a8d50aae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.834459 4802 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.834490 4802 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-ceph\") on node \"crc\" DevicePath \"\""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.834500 4802 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-var-lib-manila\") on node \"crc\" DevicePath \"\""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.834509 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-scripts\") on node \"crc\" DevicePath \"\""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.834517 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9wdk\" (UniqueName: \"kubernetes.io/projected/0ab39419-3804-460a-942c-7236a8d50aae-kube-api-access-b9wdk\") on node \"crc\" DevicePath \"\""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.834527 4802 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ab39419-3804-460a-942c-7236a8d50aae-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.834536 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.856622 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data" (OuterVolumeSpecName: "config-data") pod "0ab39419-3804-460a-942c-7236a8d50aae" (UID: "0ab39419-3804-460a-942c-7236a8d50aae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:43:07 crc kubenswrapper[4802]: I1004 05:43:07.935609 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab39419-3804-460a-942c-7236a8d50aae-config-data\") on node \"crc\" DevicePath \"\""
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.130458 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81d6574-95c4-4583-893a-87f8a22d6162","Type":"ContainerStarted","Data":"ed1ef453b105c38db2698f94306af0cf23899cfc3d43167cb501068f6202c017"}
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.133118 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0ab39419-3804-460a-942c-7236a8d50aae","Type":"ContainerDied","Data":"0846bd70282ed58d3b30cdfede134e927930641bd33b074cfda6a12288e93a49"}
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.133161 4802 scope.go:117] "RemoveContainer" containerID="8db3b4ac27fd6f7f4579e921c4fcec2a9b6bdc5f95985f573fad1a8ec4bb7d89"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.133286 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.167520 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.168894 4802 scope.go:117] "RemoveContainer" containerID="f0917396e2f45ff2f1b0d2315aa7d237af8bdc1db8e850b6992ddab81eee2b9a"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.186500 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.192470 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Oct 04 05:43:08 crc kubenswrapper[4802]: E1004 05:43:08.192970 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab39419-3804-460a-942c-7236a8d50aae" containerName="manila-share"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.192992 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab39419-3804-460a-942c-7236a8d50aae" containerName="manila-share"
Oct 04 05:43:08 crc kubenswrapper[4802]: E1004 05:43:08.193004 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab39419-3804-460a-942c-7236a8d50aae" containerName="probe"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.193010 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab39419-3804-460a-942c-7236a8d50aae" containerName="probe"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.193206 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab39419-3804-460a-942c-7236a8d50aae" containerName="probe"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.193235 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab39419-3804-460a-942c-7236a8d50aae" containerName="manila-share"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.194450 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.196306 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.203209 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.242396 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-scripts\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.242469 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2psdc\" (UniqueName: \"kubernetes.io/projected/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-kube-api-access-2psdc\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.242523 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.242689 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-ceph\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.242783 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-config-data\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.242830 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.242916 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.242954 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.344998 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.345109 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-ceph\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.345164 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-config-data\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.345184 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.345221 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.345244 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.345285 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-scripts\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.345301 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2psdc\" (UniqueName: \"kubernetes.io/projected/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-kube-api-access-2psdc\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.345502 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.346386 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.350995 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-scripts\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.351151 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-ceph\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.351223 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.351309 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-config-data\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.351844 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.360386 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2psdc\" (UniqueName: \"kubernetes.io/projected/779c01dc-3f55-4f94-9f1d-b78f6aa256f3-kube-api-access-2psdc\") pod \"manila-share-share1-0\" (UID: \"779c01dc-3f55-4f94-9f1d-b78f6aa256f3\") " pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.370369 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab39419-3804-460a-942c-7236a8d50aae" path="/var/lib/kubelet/pods/0ab39419-3804-460a-942c-7236a8d50aae/volumes"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.517950 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 04 05:43:08 crc kubenswrapper[4802]: I1004 05:43:08.774801 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 04 05:43:09 crc kubenswrapper[4802]: I1004 05:43:09.118517 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 04 05:43:09 crc kubenswrapper[4802]: I1004 05:43:09.145766 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"779c01dc-3f55-4f94-9f1d-b78f6aa256f3","Type":"ContainerStarted","Data":"cee018ff3d0205cbee9497ea17838169a73221b6e75fa6b91d6da0a753d7759e"}
Oct 04 05:43:09 crc kubenswrapper[4802]: I1004 05:43:09.153661 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81d6574-95c4-4583-893a-87f8a22d6162","Type":"ContainerStarted","Data":"67ef4cd4411c01b8fd278f7f307834f8a4aefe6f4d0132d69ac260cd73694950"}
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.186311 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"779c01dc-3f55-4f94-9f1d-b78f6aa256f3","Type":"ContainerStarted","Data":"9a3f64d3a1ece9ac96309b436693b65b552d060241cf74600bc566b34bd4a035"}
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.220973 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81d6574-95c4-4583-893a-87f8a22d6162","Type":"ContainerStarted","Data":"be745d886a68514159c1a1af532232bf36f84dbf229292046a907ac4a5dd5df0"}
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.247008 4802 generic.go:334] "Generic (PLEG): container finished" podID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerID="069bec6400f2cf051c410edc1aaf930847a36235dff7a63a36443af5e686db58" exitCode=0
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.247057 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"15473ce1-4cda-4edf-83f5-56bba2dbcf0c","Type":"ContainerDied","Data":"069bec6400f2cf051c410edc1aaf930847a36235dff7a63a36443af5e686db58"}
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.521883 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.697948 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5flrz\" (UniqueName: \"kubernetes.io/projected/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-kube-api-access-5flrz\") pod \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") "
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.698307 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data\") pod \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") "
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.698426 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data-custom\") pod \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") "
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.698458 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-combined-ca-bundle\") pod \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") "
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.698543 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-scripts\") pod \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") "
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.698577 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-etc-machine-id\") pod \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\" (UID: \"15473ce1-4cda-4edf-83f5-56bba2dbcf0c\") "
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.706133 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-scripts" (OuterVolumeSpecName: "scripts") pod "15473ce1-4cda-4edf-83f5-56bba2dbcf0c" (UID: "15473ce1-4cda-4edf-83f5-56bba2dbcf0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.707513 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "15473ce1-4cda-4edf-83f5-56bba2dbcf0c" (UID: "15473ce1-4cda-4edf-83f5-56bba2dbcf0c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.707876 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "15473ce1-4cda-4edf-83f5-56bba2dbcf0c" (UID: "15473ce1-4cda-4edf-83f5-56bba2dbcf0c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.712816 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-kube-api-access-5flrz" (OuterVolumeSpecName: "kube-api-access-5flrz") pod "15473ce1-4cda-4edf-83f5-56bba2dbcf0c" (UID: "15473ce1-4cda-4edf-83f5-56bba2dbcf0c"). InnerVolumeSpecName "kube-api-access-5flrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.771758 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15473ce1-4cda-4edf-83f5-56bba2dbcf0c" (UID: "15473ce1-4cda-4edf-83f5-56bba2dbcf0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.808975 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5flrz\" (UniqueName: \"kubernetes.io/projected/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-kube-api-access-5flrz\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.809007 4802 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.809016 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.809026 4802 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-scripts\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.809034 4802 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.840335 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data" (OuterVolumeSpecName: "config-data") pod "15473ce1-4cda-4edf-83f5-56bba2dbcf0c" (UID: "15473ce1-4cda-4edf-83f5-56bba2dbcf0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:43:10 crc kubenswrapper[4802]: I1004 05:43:10.910856 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15473ce1-4cda-4edf-83f5-56bba2dbcf0c-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.261328 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81d6574-95c4-4583-893a-87f8a22d6162","Type":"ContainerStarted","Data":"f9b79641cbd042b08d84e0a0aab47628dc03faf0f86e442acfb0626b7cdc4347"} Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.261724 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.264065 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.264067 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"15473ce1-4cda-4edf-83f5-56bba2dbcf0c","Type":"ContainerDied","Data":"d1fb0ad3cd09e49d50cc9c5cb3bb29570ebe05baf17836ddc6f87e518ee0704a"} Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.264137 4802 scope.go:117] "RemoveContainer" containerID="04d4cced873ecb5597441d2d4a28b5123c16ee6ae9ac15eed0841ef6ff7e164d" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.268300 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"779c01dc-3f55-4f94-9f1d-b78f6aa256f3","Type":"ContainerStarted","Data":"9503c423b7990d32035e4d0a0f4e464095a8cc5c8ccaff20abaf99c93166575f"} Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.286228 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.647670778 podStartE2EDuration="7.28621227s" podCreationTimestamp="2025-10-04 05:43:04 +0000 UTC" firstStartedPulling="2025-10-04 05:43:04.996243647 +0000 UTC m=+3427.404244272" lastFinishedPulling="2025-10-04 05:43:10.634785139 +0000 UTC m=+3433.042785764" observedRunningTime="2025-10-04 05:43:11.281428526 +0000 UTC m=+3433.689429151" watchObservedRunningTime="2025-10-04 05:43:11.28621227 +0000 UTC m=+3433.694212895" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.298447 4802 scope.go:117] "RemoveContainer" containerID="069bec6400f2cf051c410edc1aaf930847a36235dff7a63a36443af5e686db58" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.313654 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.328722 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 
05:43:11.343713 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Oct 04 05:43:11 crc kubenswrapper[4802]: E1004 05:43:11.344149 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerName="probe" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.344167 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerName="probe" Oct 04 05:43:11 crc kubenswrapper[4802]: E1004 05:43:11.344180 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerName="manila-scheduler" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.344187 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerName="manila-scheduler" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.344397 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerName="probe" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.344429 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" containerName="manila-scheduler" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.345431 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.348119 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.351257 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.3512368710000002 podStartE2EDuration="3.351236871s" podCreationTimestamp="2025-10-04 05:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:43:11.334444581 +0000 UTC m=+3433.742445206" watchObservedRunningTime="2025-10-04 05:43:11.351236871 +0000 UTC m=+3433.759237496" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.367370 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.420847 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr5b2\" (UniqueName: \"kubernetes.io/projected/0452628c-712f-42ea-877a-39363e757b7f-kube-api-access-wr5b2\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.420935 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0452628c-712f-42ea-877a-39363e757b7f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.420995 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.421112 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.421164 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-scripts\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.421270 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-config-data\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.523210 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-scripts\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.523360 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-config-data\") pod \"manila-scheduler-0\" (UID: 
\"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.523419 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr5b2\" (UniqueName: \"kubernetes.io/projected/0452628c-712f-42ea-877a-39363e757b7f-kube-api-access-wr5b2\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.523456 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0452628c-712f-42ea-877a-39363e757b7f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.523495 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.523577 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.524540 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0452628c-712f-42ea-877a-39363e757b7f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 
05:43:11.530581 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.531023 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-config-data\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.531057 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.532143 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0452628c-712f-42ea-877a-39363e757b7f-scripts\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.545862 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr5b2\" (UniqueName: \"kubernetes.io/projected/0452628c-712f-42ea-877a-39363e757b7f-kube-api-access-wr5b2\") pod \"manila-scheduler-0\" (UID: \"0452628c-712f-42ea-877a-39363e757b7f\") " pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.676113 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.886032 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6xhw7"] Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.888921 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.903266 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xhw7"] Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.930234 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-catalog-content\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.930474 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-utilities\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:11 crc kubenswrapper[4802]: I1004 05:43:11.930753 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4gj5\" (UniqueName: \"kubernetes.io/projected/c832dac3-bd67-4012-bdac-810c7f403b17-kube-api-access-q4gj5\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.033383 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q4gj5\" (UniqueName: \"kubernetes.io/projected/c832dac3-bd67-4012-bdac-810c7f403b17-kube-api-access-q4gj5\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.033687 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-catalog-content\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.033769 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-utilities\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.034303 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-utilities\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.034568 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-catalog-content\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.054815 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4gj5\" (UniqueName: 
\"kubernetes.io/projected/c832dac3-bd67-4012-bdac-810c7f403b17-kube-api-access-q4gj5\") pod \"community-operators-6xhw7\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:12 crc kubenswrapper[4802]: W1004 05:43:12.214203 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0452628c_712f_42ea_877a_39363e757b7f.slice/crio-00b758ffc80694e3690d51e920ba41282710d1d871e881e7c960510ebfd0fb4d WatchSource:0}: Error finding container 00b758ffc80694e3690d51e920ba41282710d1d871e881e7c960510ebfd0fb4d: Status 404 returned error can't find the container with id 00b758ffc80694e3690d51e920ba41282710d1d871e881e7c960510ebfd0fb4d Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.214770 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.229104 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.269335 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.277723 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"0452628c-712f-42ea-877a-39363e757b7f","Type":"ContainerStarted","Data":"00b758ffc80694e3690d51e920ba41282710d1d871e881e7c960510ebfd0fb4d"} Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.378241 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15473ce1-4cda-4edf-83f5-56bba2dbcf0c" path="/var/lib/kubelet/pods/15473ce1-4cda-4edf-83f5-56bba2dbcf0c/volumes" Oct 04 05:43:12 crc kubenswrapper[4802]: I1004 05:43:12.868768 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xhw7"] Oct 04 05:43:13 crc kubenswrapper[4802]: I1004 05:43:13.293159 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"0452628c-712f-42ea-877a-39363e757b7f","Type":"ContainerStarted","Data":"892a33b2ac944cc8bd029de587bf07738ea4302b1ee2b69699bf95220081c41d"} Oct 04 05:43:13 crc kubenswrapper[4802]: I1004 05:43:13.293432 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"0452628c-712f-42ea-877a-39363e757b7f","Type":"ContainerStarted","Data":"1f04cb6a4394ebece8205e2e5de6c46bd97ff6063233539918b80650aae3ee40"} Oct 04 05:43:13 crc kubenswrapper[4802]: I1004 05:43:13.297586 4802 generic.go:334] "Generic (PLEG): container finished" podID="c832dac3-bd67-4012-bdac-810c7f403b17" containerID="478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9" exitCode=0 Oct 04 05:43:13 crc kubenswrapper[4802]: I1004 05:43:13.297673 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6xhw7" event={"ID":"c832dac3-bd67-4012-bdac-810c7f403b17","Type":"ContainerDied","Data":"478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9"} Oct 04 05:43:13 crc kubenswrapper[4802]: I1004 05:43:13.297734 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xhw7" event={"ID":"c832dac3-bd67-4012-bdac-810c7f403b17","Type":"ContainerStarted","Data":"114c20d48a0cf04d752bf76702210aa8628a5fee54347d71f354edcf2650dee9"} Oct 04 05:43:13 crc kubenswrapper[4802]: I1004 05:43:13.317330 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.317308206 podStartE2EDuration="2.317308206s" podCreationTimestamp="2025-10-04 05:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 05:43:13.314423015 +0000 UTC m=+3435.722423670" watchObservedRunningTime="2025-10-04 05:43:13.317308206 +0000 UTC m=+3435.725308831" Oct 04 05:43:15 crc kubenswrapper[4802]: I1004 05:43:15.315343 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xhw7" event={"ID":"c832dac3-bd67-4012-bdac-810c7f403b17","Type":"ContainerStarted","Data":"5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531"} Oct 04 05:43:16 crc kubenswrapper[4802]: I1004 05:43:16.329300 4802 generic.go:334] "Generic (PLEG): container finished" podID="c832dac3-bd67-4012-bdac-810c7f403b17" containerID="5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531" exitCode=0 Oct 04 05:43:16 crc kubenswrapper[4802]: I1004 05:43:16.329337 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xhw7" event={"ID":"c832dac3-bd67-4012-bdac-810c7f403b17","Type":"ContainerDied","Data":"5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531"} Oct 04 
05:43:17 crc kubenswrapper[4802]: I1004 05:43:17.341565 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xhw7" event={"ID":"c832dac3-bd67-4012-bdac-810c7f403b17","Type":"ContainerStarted","Data":"986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4"} Oct 04 05:43:17 crc kubenswrapper[4802]: I1004 05:43:17.380795 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6xhw7" podStartSLOduration=2.805793805 podStartE2EDuration="6.380769142s" podCreationTimestamp="2025-10-04 05:43:11 +0000 UTC" firstStartedPulling="2025-10-04 05:43:13.299979421 +0000 UTC m=+3435.707980046" lastFinishedPulling="2025-10-04 05:43:16.874954758 +0000 UTC m=+3439.282955383" observedRunningTime="2025-10-04 05:43:17.372085539 +0000 UTC m=+3439.780086164" watchObservedRunningTime="2025-10-04 05:43:17.380769142 +0000 UTC m=+3439.788769767" Oct 04 05:43:18 crc kubenswrapper[4802]: I1004 05:43:18.519106 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 04 05:43:21 crc kubenswrapper[4802]: I1004 05:43:21.676480 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 04 05:43:22 crc kubenswrapper[4802]: I1004 05:43:22.230203 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:22 crc kubenswrapper[4802]: I1004 05:43:22.230247 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:23 crc kubenswrapper[4802]: I1004 05:43:23.272921 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6xhw7" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="registry-server" probeResult="failure" output=< Oct 04 05:43:23 crc 
kubenswrapper[4802]: timeout: failed to connect service ":50051" within 1s Oct 04 05:43:23 crc kubenswrapper[4802]: > Oct 04 05:43:30 crc kubenswrapper[4802]: I1004 05:43:30.046959 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 04 05:43:32 crc kubenswrapper[4802]: I1004 05:43:32.297736 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:32 crc kubenswrapper[4802]: I1004 05:43:32.354197 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:32 crc kubenswrapper[4802]: I1004 05:43:32.543609 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xhw7"] Oct 04 05:43:33 crc kubenswrapper[4802]: I1004 05:43:33.250542 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 04 05:43:33 crc kubenswrapper[4802]: I1004 05:43:33.491004 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6xhw7" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="registry-server" containerID="cri-o://986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4" gracePeriod=2 Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.110576 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.254700 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4gj5\" (UniqueName: \"kubernetes.io/projected/c832dac3-bd67-4012-bdac-810c7f403b17-kube-api-access-q4gj5\") pod \"c832dac3-bd67-4012-bdac-810c7f403b17\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.254774 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-catalog-content\") pod \"c832dac3-bd67-4012-bdac-810c7f403b17\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.255169 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-utilities\") pod \"c832dac3-bd67-4012-bdac-810c7f403b17\" (UID: \"c832dac3-bd67-4012-bdac-810c7f403b17\") " Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.256037 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-utilities" (OuterVolumeSpecName: "utilities") pod "c832dac3-bd67-4012-bdac-810c7f403b17" (UID: "c832dac3-bd67-4012-bdac-810c7f403b17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.260621 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c832dac3-bd67-4012-bdac-810c7f403b17-kube-api-access-q4gj5" (OuterVolumeSpecName: "kube-api-access-q4gj5") pod "c832dac3-bd67-4012-bdac-810c7f403b17" (UID: "c832dac3-bd67-4012-bdac-810c7f403b17"). InnerVolumeSpecName "kube-api-access-q4gj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.316569 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c832dac3-bd67-4012-bdac-810c7f403b17" (UID: "c832dac3-bd67-4012-bdac-810c7f403b17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.357411 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.357435 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4gj5\" (UniqueName: \"kubernetes.io/projected/c832dac3-bd67-4012-bdac-810c7f403b17-kube-api-access-q4gj5\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.357444 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c832dac3-bd67-4012-bdac-810c7f403b17-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.501246 4802 generic.go:334] "Generic (PLEG): container finished" podID="c832dac3-bd67-4012-bdac-810c7f403b17" containerID="986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4" exitCode=0 Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.501295 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xhw7" event={"ID":"c832dac3-bd67-4012-bdac-810c7f403b17","Type":"ContainerDied","Data":"986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4"} Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.501325 4802 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-6xhw7" event={"ID":"c832dac3-bd67-4012-bdac-810c7f403b17","Type":"ContainerDied","Data":"114c20d48a0cf04d752bf76702210aa8628a5fee54347d71f354edcf2650dee9"} Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.501347 4802 scope.go:117] "RemoveContainer" containerID="986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.501501 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xhw7" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.527608 4802 scope.go:117] "RemoveContainer" containerID="5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.529171 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xhw7"] Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.530116 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.537404 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6xhw7"] Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.564953 4802 scope.go:117] "RemoveContainer" containerID="478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.610840 4802 scope.go:117] "RemoveContainer" containerID="986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4" Oct 04 05:43:34 crc kubenswrapper[4802]: E1004 05:43:34.611537 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4\": container with ID starting with 986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4 not 
found: ID does not exist" containerID="986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.611606 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4"} err="failed to get container status \"986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4\": rpc error: code = NotFound desc = could not find container \"986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4\": container with ID starting with 986e6ca44792e7fc26c48c9872fd3dd5e5788eb5804c5d595afcb694207294e4 not found: ID does not exist" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.611718 4802 scope.go:117] "RemoveContainer" containerID="5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531" Oct 04 05:43:34 crc kubenswrapper[4802]: E1004 05:43:34.612051 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531\": container with ID starting with 5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531 not found: ID does not exist" containerID="5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.612084 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531"} err="failed to get container status \"5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531\": rpc error: code = NotFound desc = could not find container \"5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531\": container with ID starting with 5a2b105f870ec139aeff812194bbaa47760ba82e2f0ed95d05b63bf0a4398531 not found: ID does not exist" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.612106 
4802 scope.go:117] "RemoveContainer" containerID="478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9" Oct 04 05:43:34 crc kubenswrapper[4802]: E1004 05:43:34.612322 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9\": container with ID starting with 478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9 not found: ID does not exist" containerID="478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9" Oct 04 05:43:34 crc kubenswrapper[4802]: I1004 05:43:34.612342 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9"} err="failed to get container status \"478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9\": rpc error: code = NotFound desc = could not find container \"478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9\": container with ID starting with 478de372cbe45b3837d9a6f3cb0af338ab30a694851dcc6d8d522d71e4beb7e9 not found: ID does not exist" Oct 04 05:43:36 crc kubenswrapper[4802]: I1004 05:43:36.371980 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" path="/var/lib/kubelet/pods/c832dac3-bd67-4012-bdac-810c7f403b17/volumes" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.034281 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Oct 04 05:44:24 crc kubenswrapper[4802]: E1004 05:44:24.035201 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="extract-content" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.035217 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="extract-content" Oct 04 05:44:24 crc 
kubenswrapper[4802]: E1004 05:44:24.035230 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="registry-server" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.035236 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="registry-server" Oct 04 05:44:24 crc kubenswrapper[4802]: E1004 05:44:24.035259 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="extract-utilities" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.035266 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="extract-utilities" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.035457 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="c832dac3-bd67-4012-bdac-810c7f403b17" containerName="registry-server" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.036116 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.038960 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.039274 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.039618 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mncbg" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.041567 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.041697 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.041747 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-config-data\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.041945 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 04 05:44:24 crc 
kubenswrapper[4802]: I1004 05:44:24.047714 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143361 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143433 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143473 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6h6\" (UniqueName: \"kubernetes.io/projected/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-kube-api-access-9r6h6\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143527 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143566 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143612 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143667 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143698 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-config-data\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.143720 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.144927 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.148082 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-config-data\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.157633 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.246266 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.246735 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.246802 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.246811 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6h6\" (UniqueName: \"kubernetes.io/projected/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-kube-api-access-9r6h6\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.247098 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.247180 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.247360 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.247450 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 
05:44:24.247903 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.256122 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.257496 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.274774 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6h6\" (UniqueName: \"kubernetes.io/projected/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-kube-api-access-9r6h6\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.277857 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.428205 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 04 05:44:24 crc kubenswrapper[4802]: I1004 05:44:24.871359 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 04 05:44:24 crc kubenswrapper[4802]: W1004 05:44:24.873291 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ff51956_c2e9_4e25_9cd4_56bb6304b7db.slice/crio-473717d0204abce557f70251bd6448bff07dcde8b840e8724d6db7a521d92efb WatchSource:0}: Error finding container 473717d0204abce557f70251bd6448bff07dcde8b840e8724d6db7a521d92efb: Status 404 returned error can't find the container with id 473717d0204abce557f70251bd6448bff07dcde8b840e8724d6db7a521d92efb Oct 04 05:44:25 crc kubenswrapper[4802]: I1004 05:44:25.015236 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9ff51956-c2e9-4e25-9cd4-56bb6304b7db","Type":"ContainerStarted","Data":"473717d0204abce557f70251bd6448bff07dcde8b840e8724d6db7a521d92efb"} Oct 04 05:44:51 crc kubenswrapper[4802]: E1004 05:44:51.833071 4802 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Oct 04 05:44:51 crc kubenswrapper[4802]: E1004 05:44:51.833804 4802 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9r6h6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9ff51956-c2e9-4e25-9cd4-56bb6304b7db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 04 05:44:51 crc kubenswrapper[4802]: E1004 05:44:51.835021 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9ff51956-c2e9-4e25-9cd4-56bb6304b7db" Oct 04 05:44:52 crc kubenswrapper[4802]: E1004 05:44:52.291982 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9ff51956-c2e9-4e25-9cd4-56bb6304b7db" Oct 04 05:45:00 crc 
kubenswrapper[4802]: I1004 05:45:00.170175 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2"] Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.173372 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.175805 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.177139 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.185231 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2"] Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.294396 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7smh\" (UniqueName: \"kubernetes.io/projected/9646d53b-fea6-4a36-9984-5266dab4cc0f-kube-api-access-x7smh\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.294582 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9646d53b-fea6-4a36-9984-5266dab4cc0f-config-volume\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.294906 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9646d53b-fea6-4a36-9984-5266dab4cc0f-secret-volume\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.396444 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7smh\" (UniqueName: \"kubernetes.io/projected/9646d53b-fea6-4a36-9984-5266dab4cc0f-kube-api-access-x7smh\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.396793 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9646d53b-fea6-4a36-9984-5266dab4cc0f-config-volume\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.397028 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9646d53b-fea6-4a36-9984-5266dab4cc0f-secret-volume\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.397755 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9646d53b-fea6-4a36-9984-5266dab4cc0f-config-volume\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.404321 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9646d53b-fea6-4a36-9984-5266dab4cc0f-secret-volume\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.425195 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7smh\" (UniqueName: \"kubernetes.io/projected/9646d53b-fea6-4a36-9984-5266dab4cc0f-kube-api-access-x7smh\") pod \"collect-profiles-29325945-6l8f2\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.493830 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:00 crc kubenswrapper[4802]: I1004 05:45:00.956007 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2"] Oct 04 05:45:01 crc kubenswrapper[4802]: I1004 05:45:01.367185 4802 generic.go:334] "Generic (PLEG): container finished" podID="9646d53b-fea6-4a36-9984-5266dab4cc0f" containerID="7251d545b48d63c8029fc3fcf876281b0918b951f50e956f293019bcaa171d43" exitCode=0 Oct 04 05:45:01 crc kubenswrapper[4802]: I1004 05:45:01.367229 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" event={"ID":"9646d53b-fea6-4a36-9984-5266dab4cc0f","Type":"ContainerDied","Data":"7251d545b48d63c8029fc3fcf876281b0918b951f50e956f293019bcaa171d43"} Oct 04 05:45:01 crc kubenswrapper[4802]: I1004 05:45:01.367260 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" event={"ID":"9646d53b-fea6-4a36-9984-5266dab4cc0f","Type":"ContainerStarted","Data":"e85fb976dcde09d16c709d2d251c80d7c889cd9e545c30a7bb5308364f6cb95c"} Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.711364 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.845846 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7smh\" (UniqueName: \"kubernetes.io/projected/9646d53b-fea6-4a36-9984-5266dab4cc0f-kube-api-access-x7smh\") pod \"9646d53b-fea6-4a36-9984-5266dab4cc0f\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.845906 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9646d53b-fea6-4a36-9984-5266dab4cc0f-secret-volume\") pod \"9646d53b-fea6-4a36-9984-5266dab4cc0f\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.845977 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9646d53b-fea6-4a36-9984-5266dab4cc0f-config-volume\") pod \"9646d53b-fea6-4a36-9984-5266dab4cc0f\" (UID: \"9646d53b-fea6-4a36-9984-5266dab4cc0f\") " Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.846964 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9646d53b-fea6-4a36-9984-5266dab4cc0f-config-volume" (OuterVolumeSpecName: "config-volume") pod "9646d53b-fea6-4a36-9984-5266dab4cc0f" (UID: "9646d53b-fea6-4a36-9984-5266dab4cc0f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.851654 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9646d53b-fea6-4a36-9984-5266dab4cc0f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9646d53b-fea6-4a36-9984-5266dab4cc0f" (UID: "9646d53b-fea6-4a36-9984-5266dab4cc0f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.852364 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9646d53b-fea6-4a36-9984-5266dab4cc0f-kube-api-access-x7smh" (OuterVolumeSpecName: "kube-api-access-x7smh") pod "9646d53b-fea6-4a36-9984-5266dab4cc0f" (UID: "9646d53b-fea6-4a36-9984-5266dab4cc0f"). InnerVolumeSpecName "kube-api-access-x7smh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.948208 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7smh\" (UniqueName: \"kubernetes.io/projected/9646d53b-fea6-4a36-9984-5266dab4cc0f-kube-api-access-x7smh\") on node \"crc\" DevicePath \"\"" Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.948554 4802 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9646d53b-fea6-4a36-9984-5266dab4cc0f-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 04 05:45:02 crc kubenswrapper[4802]: I1004 05:45:02.948572 4802 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9646d53b-fea6-4a36-9984-5266dab4cc0f-config-volume\") on node \"crc\" DevicePath \"\"" Oct 04 05:45:03 crc kubenswrapper[4802]: I1004 05:45:03.390252 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" event={"ID":"9646d53b-fea6-4a36-9984-5266dab4cc0f","Type":"ContainerDied","Data":"e85fb976dcde09d16c709d2d251c80d7c889cd9e545c30a7bb5308364f6cb95c"} Oct 04 05:45:03 crc kubenswrapper[4802]: I1004 05:45:03.390321 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85fb976dcde09d16c709d2d251c80d7c889cd9e545c30a7bb5308364f6cb95c" Oct 04 05:45:03 crc kubenswrapper[4802]: I1004 05:45:03.390418 4802 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325945-6l8f2" Oct 04 05:45:03 crc kubenswrapper[4802]: I1004 05:45:03.790750 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5"] Oct 04 05:45:03 crc kubenswrapper[4802]: I1004 05:45:03.801511 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325900-bc4s5"] Oct 04 05:45:03 crc kubenswrapper[4802]: I1004 05:45:03.906504 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 04 05:45:04 crc kubenswrapper[4802]: I1004 05:45:04.372899 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a86efef7-cb21-48ee-aaaa-549dc94fbd1f" path="/var/lib/kubelet/pods/a86efef7-cb21-48ee-aaaa-549dc94fbd1f/volumes" Oct 04 05:45:05 crc kubenswrapper[4802]: I1004 05:45:05.412184 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9ff51956-c2e9-4e25-9cd4-56bb6304b7db","Type":"ContainerStarted","Data":"af116b74e694deacf7e2df1ef4a80cc45162275738010ba992f4280448ef0a59"} Oct 04 05:45:05 crc kubenswrapper[4802]: I1004 05:45:05.435967 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.407854105 podStartE2EDuration="42.435949161s" podCreationTimestamp="2025-10-04 05:44:23 +0000 UTC" firstStartedPulling="2025-10-04 05:44:24.875603368 +0000 UTC m=+3507.283603993" lastFinishedPulling="2025-10-04 05:45:03.903698424 +0000 UTC m=+3546.311699049" observedRunningTime="2025-10-04 05:45:05.428989111 +0000 UTC m=+3547.836989746" watchObservedRunningTime="2025-10-04 05:45:05.435949161 +0000 UTC m=+3547.843949786" Oct 04 05:45:15 crc kubenswrapper[4802]: I1004 05:45:15.144613 4802 scope.go:117] "RemoveContainer" 
containerID="b9f3aaad7d0dcefd40b9bf45027aa5cc5a09c50ccc6fbf0387beab907ec52f2e" Oct 04 05:45:22 crc kubenswrapper[4802]: I1004 05:45:22.663140 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:45:22 crc kubenswrapper[4802]: I1004 05:45:22.663721 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.177812 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s2dxk"] Oct 04 05:45:47 crc kubenswrapper[4802]: E1004 05:45:47.178797 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9646d53b-fea6-4a36-9984-5266dab4cc0f" containerName="collect-profiles" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.178814 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9646d53b-fea6-4a36-9984-5266dab4cc0f" containerName="collect-profiles" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.179051 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9646d53b-fea6-4a36-9984-5266dab4cc0f" containerName="collect-profiles" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.180900 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.198376 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2dxk"] Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.284295 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9r7q\" (UniqueName: \"kubernetes.io/projected/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-kube-api-access-c9r7q\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.284409 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-utilities\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.284458 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-catalog-content\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.386698 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9r7q\" (UniqueName: \"kubernetes.io/projected/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-kube-api-access-c9r7q\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.386809 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-utilities\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.386895 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-catalog-content\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.387362 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-catalog-content\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.387875 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-utilities\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.405972 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9r7q\" (UniqueName: \"kubernetes.io/projected/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-kube-api-access-c9r7q\") pod \"certified-operators-s2dxk\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:47 crc kubenswrapper[4802]: I1004 05:45:47.521162 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:48 crc kubenswrapper[4802]: I1004 05:45:48.008860 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2dxk"] Oct 04 05:45:48 crc kubenswrapper[4802]: I1004 05:45:48.843249 4802 generic.go:334] "Generic (PLEG): container finished" podID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerID="3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea" exitCode=0 Oct 04 05:45:48 crc kubenswrapper[4802]: I1004 05:45:48.843485 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2dxk" event={"ID":"4ec911d3-fc70-4003-9f02-bc8d4b6c259f","Type":"ContainerDied","Data":"3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea"} Oct 04 05:45:48 crc kubenswrapper[4802]: I1004 05:45:48.843605 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2dxk" event={"ID":"4ec911d3-fc70-4003-9f02-bc8d4b6c259f","Type":"ContainerStarted","Data":"b429ba21db739b3fc21c719f72e26a48f98afcf6f0c059e27e3f87a50a4ccd61"} Oct 04 05:45:49 crc kubenswrapper[4802]: I1004 05:45:49.854274 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2dxk" event={"ID":"4ec911d3-fc70-4003-9f02-bc8d4b6c259f","Type":"ContainerStarted","Data":"aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f"} Oct 04 05:45:50 crc kubenswrapper[4802]: I1004 05:45:50.865396 4802 generic.go:334] "Generic (PLEG): container finished" podID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerID="aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f" exitCode=0 Oct 04 05:45:50 crc kubenswrapper[4802]: I1004 05:45:50.865443 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2dxk" 
event={"ID":"4ec911d3-fc70-4003-9f02-bc8d4b6c259f","Type":"ContainerDied","Data":"aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f"} Oct 04 05:45:51 crc kubenswrapper[4802]: I1004 05:45:51.875450 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2dxk" event={"ID":"4ec911d3-fc70-4003-9f02-bc8d4b6c259f","Type":"ContainerStarted","Data":"d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29"} Oct 04 05:45:51 crc kubenswrapper[4802]: I1004 05:45:51.909182 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s2dxk" podStartSLOduration=2.478893461 podStartE2EDuration="4.909164917s" podCreationTimestamp="2025-10-04 05:45:47 +0000 UTC" firstStartedPulling="2025-10-04 05:45:48.849494733 +0000 UTC m=+3591.257495398" lastFinishedPulling="2025-10-04 05:45:51.279766219 +0000 UTC m=+3593.687766854" observedRunningTime="2025-10-04 05:45:51.903942036 +0000 UTC m=+3594.311942661" watchObservedRunningTime="2025-10-04 05:45:51.909164917 +0000 UTC m=+3594.317165542" Oct 04 05:45:52 crc kubenswrapper[4802]: I1004 05:45:52.666222 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:45:52 crc kubenswrapper[4802]: I1004 05:45:52.666548 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:45:57 crc kubenswrapper[4802]: I1004 05:45:57.521555 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:57 crc kubenswrapper[4802]: I1004 05:45:57.522219 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:57 crc kubenswrapper[4802]: I1004 05:45:57.589083 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:57 crc kubenswrapper[4802]: I1004 05:45:57.985319 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:45:58 crc kubenswrapper[4802]: I1004 05:45:58.041172 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s2dxk"] Oct 04 05:45:59 crc kubenswrapper[4802]: I1004 05:45:59.952243 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s2dxk" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerName="registry-server" containerID="cri-o://d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29" gracePeriod=2 Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.466655 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.650468 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-utilities\") pod \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.650666 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-catalog-content\") pod \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.650750 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9r7q\" (UniqueName: \"kubernetes.io/projected/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-kube-api-access-c9r7q\") pod \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\" (UID: \"4ec911d3-fc70-4003-9f02-bc8d4b6c259f\") " Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.651288 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-utilities" (OuterVolumeSpecName: "utilities") pod "4ec911d3-fc70-4003-9f02-bc8d4b6c259f" (UID: "4ec911d3-fc70-4003-9f02-bc8d4b6c259f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.652631 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.659687 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-kube-api-access-c9r7q" (OuterVolumeSpecName: "kube-api-access-c9r7q") pod "4ec911d3-fc70-4003-9f02-bc8d4b6c259f" (UID: "4ec911d3-fc70-4003-9f02-bc8d4b6c259f"). InnerVolumeSpecName "kube-api-access-c9r7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.754596 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9r7q\" (UniqueName: \"kubernetes.io/projected/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-kube-api-access-c9r7q\") on node \"crc\" DevicePath \"\"" Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.967722 4802 generic.go:334] "Generic (PLEG): container finished" podID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerID="d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29" exitCode=0 Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.967797 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2dxk" event={"ID":"4ec911d3-fc70-4003-9f02-bc8d4b6c259f","Type":"ContainerDied","Data":"d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29"} Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.968055 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2dxk" event={"ID":"4ec911d3-fc70-4003-9f02-bc8d4b6c259f","Type":"ContainerDied","Data":"b429ba21db739b3fc21c719f72e26a48f98afcf6f0c059e27e3f87a50a4ccd61"} Oct 04 05:46:00 crc kubenswrapper[4802]: 
I1004 05:46:00.967847 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2dxk" Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.968077 4802 scope.go:117] "RemoveContainer" containerID="d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29" Oct 04 05:46:00 crc kubenswrapper[4802]: I1004 05:46:00.993694 4802 scope.go:117] "RemoveContainer" containerID="aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.028963 4802 scope.go:117] "RemoveContainer" containerID="3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.084598 4802 scope.go:117] "RemoveContainer" containerID="d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29" Oct 04 05:46:01 crc kubenswrapper[4802]: E1004 05:46:01.085208 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29\": container with ID starting with d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29 not found: ID does not exist" containerID="d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.085248 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29"} err="failed to get container status \"d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29\": rpc error: code = NotFound desc = could not find container \"d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29\": container with ID starting with d7666b3694247386c4dc70e43ea7d0792c23a9b9615bc76d788a15b3f6bbaa29 not found: ID does not exist" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.085276 4802 
scope.go:117] "RemoveContainer" containerID="aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f" Oct 04 05:46:01 crc kubenswrapper[4802]: E1004 05:46:01.085725 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f\": container with ID starting with aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f not found: ID does not exist" containerID="aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.085755 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f"} err="failed to get container status \"aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f\": rpc error: code = NotFound desc = could not find container \"aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f\": container with ID starting with aa59825719501831146b6f154dc4e4fafa4910a6bd02a7849ec46fa1374ba92f not found: ID does not exist" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.085777 4802 scope.go:117] "RemoveContainer" containerID="3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea" Oct 04 05:46:01 crc kubenswrapper[4802]: E1004 05:46:01.086095 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea\": container with ID starting with 3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea not found: ID does not exist" containerID="3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.086143 4802 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea"} err="failed to get container status \"3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea\": rpc error: code = NotFound desc = could not find container \"3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea\": container with ID starting with 3fbb71ca2acd1f849bca96a37d0b28c1042897ad6323af106ab522f5de96eeea not found: ID does not exist" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.242938 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ec911d3-fc70-4003-9f02-bc8d4b6c259f" (UID: "4ec911d3-fc70-4003-9f02-bc8d4b6c259f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.264870 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec911d3-fc70-4003-9f02-bc8d4b6c259f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.300288 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s2dxk"] Oct 04 05:46:01 crc kubenswrapper[4802]: I1004 05:46:01.310294 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s2dxk"] Oct 04 05:46:02 crc kubenswrapper[4802]: I1004 05:46:02.371333 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" path="/var/lib/kubelet/pods/4ec911d3-fc70-4003-9f02-bc8d4b6c259f/volumes" Oct 04 05:46:22 crc kubenswrapper[4802]: I1004 05:46:22.662359 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:46:22 crc kubenswrapper[4802]: I1004 05:46:22.662971 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:46:22 crc kubenswrapper[4802]: I1004 05:46:22.663030 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:46:22 crc kubenswrapper[4802]: I1004 05:46:22.664817 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2460e6a2a76c8e7cf39651cc3e284f1c1f8c2f793fcb926c2145f17ff566d459"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:46:22 crc kubenswrapper[4802]: I1004 05:46:22.664889 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://2460e6a2a76c8e7cf39651cc3e284f1c1f8c2f793fcb926c2145f17ff566d459" gracePeriod=600 Oct 04 05:46:23 crc kubenswrapper[4802]: I1004 05:46:23.220667 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="2460e6a2a76c8e7cf39651cc3e284f1c1f8c2f793fcb926c2145f17ff566d459" exitCode=0 Oct 04 05:46:23 crc kubenswrapper[4802]: I1004 05:46:23.220755 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" 
event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"2460e6a2a76c8e7cf39651cc3e284f1c1f8c2f793fcb926c2145f17ff566d459"} Oct 04 05:46:23 crc kubenswrapper[4802]: I1004 05:46:23.220986 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509"} Oct 04 05:46:23 crc kubenswrapper[4802]: I1004 05:46:23.221014 4802 scope.go:117] "RemoveContainer" containerID="878b310eb9bcdd614a5039cae4dda2d4523e05d601ad286e5c38b80645b75f76" Oct 04 05:48:39 crc kubenswrapper[4802]: I1004 05:48:39.728514 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qmks7" Oct 04 05:48:52 crc kubenswrapper[4802]: I1004 05:48:52.662963 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:48:52 crc kubenswrapper[4802]: I1004 05:48:52.663570 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:49:22 crc kubenswrapper[4802]: I1004 05:49:22.662687 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:49:22 crc 
kubenswrapper[4802]: I1004 05:49:22.663193 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:49:52 crc kubenswrapper[4802]: I1004 05:49:52.662909 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:49:52 crc kubenswrapper[4802]: I1004 05:49:52.663523 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:49:52 crc kubenswrapper[4802]: I1004 05:49:52.663676 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:49:52 crc kubenswrapper[4802]: I1004 05:49:52.664677 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:49:52 crc kubenswrapper[4802]: I1004 05:49:52.664773 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" 
podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" gracePeriod=600 Oct 04 05:49:52 crc kubenswrapper[4802]: E1004 05:49:52.810241 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:49:53 crc kubenswrapper[4802]: I1004 05:49:53.225938 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" exitCode=0 Oct 04 05:49:53 crc kubenswrapper[4802]: I1004 05:49:53.225993 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509"} Oct 04 05:49:53 crc kubenswrapper[4802]: I1004 05:49:53.226039 4802 scope.go:117] "RemoveContainer" containerID="2460e6a2a76c8e7cf39651cc3e284f1c1f8c2f793fcb926c2145f17ff566d459" Oct 04 05:49:53 crc kubenswrapper[4802]: I1004 05:49:53.226775 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:49:53 crc kubenswrapper[4802]: E1004 05:49:53.227110 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:50:06 crc kubenswrapper[4802]: I1004 05:50:06.360325 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:50:06 crc kubenswrapper[4802]: E1004 05:50:06.361107 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:50:17 crc kubenswrapper[4802]: I1004 05:50:17.359448 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:50:17 crc kubenswrapper[4802]: E1004 05:50:17.360343 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:50:32 crc kubenswrapper[4802]: I1004 05:50:32.368734 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:50:32 crc kubenswrapper[4802]: E1004 05:50:32.369761 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:50:45 crc kubenswrapper[4802]: I1004 05:50:45.359928 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:50:45 crc kubenswrapper[4802]: E1004 05:50:45.360618 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:51:00 crc kubenswrapper[4802]: I1004 05:51:00.359996 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:51:00 crc kubenswrapper[4802]: E1004 05:51:00.361880 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:51:15 crc kubenswrapper[4802]: I1004 05:51:15.360531 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:51:15 crc kubenswrapper[4802]: E1004 05:51:15.361507 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:51:30 crc kubenswrapper[4802]: I1004 05:51:30.360060 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:51:30 crc kubenswrapper[4802]: E1004 05:51:30.360900 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.720592 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q6srr"] Oct 04 05:51:35 crc kubenswrapper[4802]: E1004 05:51:35.721723 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerName="extract-content" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.721746 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerName="extract-content" Oct 04 05:51:35 crc kubenswrapper[4802]: E1004 05:51:35.721762 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerName="registry-server" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.721772 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerName="registry-server" Oct 04 05:51:35 crc kubenswrapper[4802]: E1004 05:51:35.721809 4802 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerName="extract-utilities" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.721821 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerName="extract-utilities" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.722093 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec911d3-fc70-4003-9f02-bc8d4b6c259f" containerName="registry-server" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.723829 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.731219 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6srr"] Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.871090 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mss8l\" (UniqueName: \"kubernetes.io/projected/c43a47cd-6dcf-4efe-a101-007336624a31-kube-api-access-mss8l\") pod \"redhat-operators-q6srr\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.871143 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-utilities\") pod \"redhat-operators-q6srr\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.871248 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-catalog-content\") pod 
\"redhat-operators-q6srr\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.973212 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mss8l\" (UniqueName: \"kubernetes.io/projected/c43a47cd-6dcf-4efe-a101-007336624a31-kube-api-access-mss8l\") pod \"redhat-operators-q6srr\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.973248 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-utilities\") pod \"redhat-operators-q6srr\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.973322 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-catalog-content\") pod \"redhat-operators-q6srr\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.973831 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-catalog-content\") pod \"redhat-operators-q6srr\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:35 crc kubenswrapper[4802]: I1004 05:51:35.973858 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-utilities\") pod \"redhat-operators-q6srr\" (UID: 
\"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:36 crc kubenswrapper[4802]: I1004 05:51:36.004416 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mss8l\" (UniqueName: \"kubernetes.io/projected/c43a47cd-6dcf-4efe-a101-007336624a31-kube-api-access-mss8l\") pod \"redhat-operators-q6srr\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:36 crc kubenswrapper[4802]: I1004 05:51:36.055006 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:36 crc kubenswrapper[4802]: I1004 05:51:36.520607 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6srr"] Oct 04 05:51:37 crc kubenswrapper[4802]: I1004 05:51:37.186023 4802 generic.go:334] "Generic (PLEG): container finished" podID="c43a47cd-6dcf-4efe-a101-007336624a31" containerID="309f714fdcc72e3cc1d62c5d21877f3d6bbaf89ba5b4f38cf8cb69789c393d06" exitCode=0 Oct 04 05:51:37 crc kubenswrapper[4802]: I1004 05:51:37.186071 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6srr" event={"ID":"c43a47cd-6dcf-4efe-a101-007336624a31","Type":"ContainerDied","Data":"309f714fdcc72e3cc1d62c5d21877f3d6bbaf89ba5b4f38cf8cb69789c393d06"} Oct 04 05:51:37 crc kubenswrapper[4802]: I1004 05:51:37.186334 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6srr" event={"ID":"c43a47cd-6dcf-4efe-a101-007336624a31","Type":"ContainerStarted","Data":"af77baf8614fe0ed8cdc1165addbc9f7b044dd074864f7ab41f726e6aab14e52"} Oct 04 05:51:37 crc kubenswrapper[4802]: I1004 05:51:37.189552 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 05:51:39 crc kubenswrapper[4802]: I1004 05:51:39.222968 4802 generic.go:334] 
"Generic (PLEG): container finished" podID="c43a47cd-6dcf-4efe-a101-007336624a31" containerID="5a25909167ad37d8f18ff81215420ab23f03a8d357e74f6079d82e7ee3ff0d31" exitCode=0 Oct 04 05:51:39 crc kubenswrapper[4802]: I1004 05:51:39.223064 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6srr" event={"ID":"c43a47cd-6dcf-4efe-a101-007336624a31","Type":"ContainerDied","Data":"5a25909167ad37d8f18ff81215420ab23f03a8d357e74f6079d82e7ee3ff0d31"} Oct 04 05:51:41 crc kubenswrapper[4802]: I1004 05:51:41.245928 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6srr" event={"ID":"c43a47cd-6dcf-4efe-a101-007336624a31","Type":"ContainerStarted","Data":"d0298666af9858e99865e567eea0cebaf3e5233e05be64916b50ae67d53e2146"} Oct 04 05:51:41 crc kubenswrapper[4802]: I1004 05:51:41.263867 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q6srr" podStartSLOduration=2.7387701 podStartE2EDuration="6.263848354s" podCreationTimestamp="2025-10-04 05:51:35 +0000 UTC" firstStartedPulling="2025-10-04 05:51:37.189289349 +0000 UTC m=+3939.597289974" lastFinishedPulling="2025-10-04 05:51:40.714367603 +0000 UTC m=+3943.122368228" observedRunningTime="2025-10-04 05:51:41.260636142 +0000 UTC m=+3943.668636777" watchObservedRunningTime="2025-10-04 05:51:41.263848354 +0000 UTC m=+3943.671848999" Oct 04 05:51:45 crc kubenswrapper[4802]: I1004 05:51:45.360017 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:51:45 crc kubenswrapper[4802]: E1004 05:51:45.361692 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:51:46 crc kubenswrapper[4802]: I1004 05:51:46.055141 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:46 crc kubenswrapper[4802]: I1004 05:51:46.055192 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:47 crc kubenswrapper[4802]: I1004 05:51:47.098334 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q6srr" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="registry-server" probeResult="failure" output=< Oct 04 05:51:47 crc kubenswrapper[4802]: timeout: failed to connect service ":50051" within 1s Oct 04 05:51:47 crc kubenswrapper[4802]: > Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.177114 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k9xq2"] Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.180033 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.191077 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9xq2"] Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.267241 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrftd\" (UniqueName: \"kubernetes.io/projected/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-kube-api-access-hrftd\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.267413 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-catalog-content\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.267444 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-utilities\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.368937 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrftd\" (UniqueName: \"kubernetes.io/projected/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-kube-api-access-hrftd\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.369160 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-catalog-content\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.369203 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-utilities\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.369903 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-catalog-content\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.369919 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-utilities\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.390449 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrftd\" (UniqueName: \"kubernetes.io/projected/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-kube-api-access-hrftd\") pod \"redhat-marketplace-k9xq2\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.501489 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:51:50 crc kubenswrapper[4802]: I1004 05:51:50.995246 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9xq2"] Oct 04 05:51:51 crc kubenswrapper[4802]: I1004 05:51:51.356179 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9xq2" event={"ID":"7a6e0f09-d07d-4a33-9442-4c7de36a30d3","Type":"ContainerStarted","Data":"83faa11c785b12d383628676ff74d7fc12cafb319b5c1db034b75dbd1fbf638c"} Oct 04 05:51:51 crc kubenswrapper[4802]: E1004 05:51:51.856300 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6e0f09_d07d_4a33_9442_4c7de36a30d3.slice/crio-ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1.scope\": RecentStats: unable to find data in memory cache]" Oct 04 05:51:52 crc kubenswrapper[4802]: I1004 05:51:52.368867 4802 generic.go:334] "Generic (PLEG): container finished" podID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerID="ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1" exitCode=0 Oct 04 05:51:52 crc kubenswrapper[4802]: I1004 05:51:52.372486 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9xq2" event={"ID":"7a6e0f09-d07d-4a33-9442-4c7de36a30d3","Type":"ContainerDied","Data":"ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1"} Oct 04 05:51:55 crc kubenswrapper[4802]: I1004 05:51:55.397972 4802 generic.go:334] "Generic (PLEG): container finished" podID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerID="2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9" exitCode=0 Oct 04 05:51:55 crc kubenswrapper[4802]: I1004 05:51:55.398033 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9xq2" 
event={"ID":"7a6e0f09-d07d-4a33-9442-4c7de36a30d3","Type":"ContainerDied","Data":"2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9"} Oct 04 05:51:56 crc kubenswrapper[4802]: I1004 05:51:56.125946 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:56 crc kubenswrapper[4802]: I1004 05:51:56.186613 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:51:59 crc kubenswrapper[4802]: I1004 05:51:59.368183 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q6srr"] Oct 04 05:51:59 crc kubenswrapper[4802]: I1004 05:51:59.368935 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q6srr" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="registry-server" containerID="cri-o://d0298666af9858e99865e567eea0cebaf3e5233e05be64916b50ae67d53e2146" gracePeriod=2 Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.040867 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-cps8q"] Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.052307 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-cps8q"] Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.359578 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:52:00 crc kubenswrapper[4802]: E1004 05:52:00.359969 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.377557 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516ec0fa-3ee1-4110-82c3-2f6b480671e0" path="/var/lib/kubelet/pods/516ec0fa-3ee1-4110-82c3-2f6b480671e0/volumes" Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.440262 4802 generic.go:334] "Generic (PLEG): container finished" podID="c43a47cd-6dcf-4efe-a101-007336624a31" containerID="d0298666af9858e99865e567eea0cebaf3e5233e05be64916b50ae67d53e2146" exitCode=0 Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.440502 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6srr" event={"ID":"c43a47cd-6dcf-4efe-a101-007336624a31","Type":"ContainerDied","Data":"d0298666af9858e99865e567eea0cebaf3e5233e05be64916b50ae67d53e2146"} Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.902571 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.991097 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mss8l\" (UniqueName: \"kubernetes.io/projected/c43a47cd-6dcf-4efe-a101-007336624a31-kube-api-access-mss8l\") pod \"c43a47cd-6dcf-4efe-a101-007336624a31\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.991356 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-utilities\") pod \"c43a47cd-6dcf-4efe-a101-007336624a31\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.991745 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-catalog-content\") pod \"c43a47cd-6dcf-4efe-a101-007336624a31\" (UID: \"c43a47cd-6dcf-4efe-a101-007336624a31\") " Oct 04 05:52:00 crc kubenswrapper[4802]: I1004 05:52:00.994110 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-utilities" (OuterVolumeSpecName: "utilities") pod "c43a47cd-6dcf-4efe-a101-007336624a31" (UID: "c43a47cd-6dcf-4efe-a101-007336624a31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.005672 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43a47cd-6dcf-4efe-a101-007336624a31-kube-api-access-mss8l" (OuterVolumeSpecName: "kube-api-access-mss8l") pod "c43a47cd-6dcf-4efe-a101-007336624a31" (UID: "c43a47cd-6dcf-4efe-a101-007336624a31"). InnerVolumeSpecName "kube-api-access-mss8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.076739 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c43a47cd-6dcf-4efe-a101-007336624a31" (UID: "c43a47cd-6dcf-4efe-a101-007336624a31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.094581 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.094831 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mss8l\" (UniqueName: \"kubernetes.io/projected/c43a47cd-6dcf-4efe-a101-007336624a31-kube-api-access-mss8l\") on node \"crc\" DevicePath \"\"" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.094903 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c43a47cd-6dcf-4efe-a101-007336624a31-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.453952 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9xq2" event={"ID":"7a6e0f09-d07d-4a33-9442-4c7de36a30d3","Type":"ContainerStarted","Data":"f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf"} Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.458483 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6srr" event={"ID":"c43a47cd-6dcf-4efe-a101-007336624a31","Type":"ContainerDied","Data":"af77baf8614fe0ed8cdc1165addbc9f7b044dd074864f7ab41f726e6aab14e52"} Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 
05:52:01.458694 4802 scope.go:117] "RemoveContainer" containerID="d0298666af9858e99865e567eea0cebaf3e5233e05be64916b50ae67d53e2146" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.458529 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6srr" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.481865 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k9xq2" podStartSLOduration=3.468858261 podStartE2EDuration="11.481848272s" podCreationTimestamp="2025-10-04 05:51:50 +0000 UTC" firstStartedPulling="2025-10-04 05:51:52.371014284 +0000 UTC m=+3954.779014919" lastFinishedPulling="2025-10-04 05:52:00.384004305 +0000 UTC m=+3962.792004930" observedRunningTime="2025-10-04 05:52:01.480427812 +0000 UTC m=+3963.888428437" watchObservedRunningTime="2025-10-04 05:52:01.481848272 +0000 UTC m=+3963.889848897" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.508000 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q6srr"] Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.516861 4802 scope.go:117] "RemoveContainer" containerID="5a25909167ad37d8f18ff81215420ab23f03a8d357e74f6079d82e7ee3ff0d31" Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.523500 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q6srr"] Oct 04 05:52:01 crc kubenswrapper[4802]: I1004 05:52:01.545305 4802 scope.go:117] "RemoveContainer" containerID="309f714fdcc72e3cc1d62c5d21877f3d6bbaf89ba5b4f38cf8cb69789c393d06" Oct 04 05:52:02 crc kubenswrapper[4802]: I1004 05:52:02.372022 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" path="/var/lib/kubelet/pods/c43a47cd-6dcf-4efe-a101-007336624a31/volumes" Oct 04 05:52:10 crc kubenswrapper[4802]: I1004 05:52:10.501596 4802 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:52:10 crc kubenswrapper[4802]: I1004 05:52:10.502220 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:52:10 crc kubenswrapper[4802]: I1004 05:52:10.558799 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:52:10 crc kubenswrapper[4802]: I1004 05:52:10.617941 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:52:10 crc kubenswrapper[4802]: I1004 05:52:10.793704 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9xq2"] Oct 04 05:52:12 crc kubenswrapper[4802]: I1004 05:52:12.359924 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:52:12 crc kubenswrapper[4802]: E1004 05:52:12.360554 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:52:12 crc kubenswrapper[4802]: I1004 05:52:12.566378 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k9xq2" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerName="registry-server" containerID="cri-o://f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf" gracePeriod=2 Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.146790 4802 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.264317 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-utilities\") pod \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.264894 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-catalog-content\") pod \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.264975 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrftd\" (UniqueName: \"kubernetes.io/projected/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-kube-api-access-hrftd\") pod \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\" (UID: \"7a6e0f09-d07d-4a33-9442-4c7de36a30d3\") " Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.265572 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-utilities" (OuterVolumeSpecName: "utilities") pod "7a6e0f09-d07d-4a33-9442-4c7de36a30d3" (UID: "7a6e0f09-d07d-4a33-9442-4c7de36a30d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.277847 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a6e0f09-d07d-4a33-9442-4c7de36a30d3" (UID: "7a6e0f09-d07d-4a33-9442-4c7de36a30d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.286745 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-kube-api-access-hrftd" (OuterVolumeSpecName: "kube-api-access-hrftd") pod "7a6e0f09-d07d-4a33-9442-4c7de36a30d3" (UID: "7a6e0f09-d07d-4a33-9442-4c7de36a30d3"). InnerVolumeSpecName "kube-api-access-hrftd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.366962 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.366993 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrftd\" (UniqueName: \"kubernetes.io/projected/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-kube-api-access-hrftd\") on node \"crc\" DevicePath \"\"" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.367009 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6e0f09-d07d-4a33-9442-4c7de36a30d3-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.596366 4802 generic.go:334] "Generic (PLEG): container finished" podID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerID="f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf" exitCode=0 Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.596423 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9xq2" event={"ID":"7a6e0f09-d07d-4a33-9442-4c7de36a30d3","Type":"ContainerDied","Data":"f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf"} Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.596466 4802 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9xq2" event={"ID":"7a6e0f09-d07d-4a33-9442-4c7de36a30d3","Type":"ContainerDied","Data":"83faa11c785b12d383628676ff74d7fc12cafb319b5c1db034b75dbd1fbf638c"} Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.596484 4802 scope.go:117] "RemoveContainer" containerID="f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.596694 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9xq2" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.630155 4802 scope.go:117] "RemoveContainer" containerID="2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.637254 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9xq2"] Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.645482 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9xq2"] Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.662854 4802 scope.go:117] "RemoveContainer" containerID="ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.696053 4802 scope.go:117] "RemoveContainer" containerID="f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf" Oct 04 05:52:13 crc kubenswrapper[4802]: E1004 05:52:13.696447 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf\": container with ID starting with f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf not found: ID does not exist" containerID="f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 
05:52:13.696488 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf"} err="failed to get container status \"f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf\": rpc error: code = NotFound desc = could not find container \"f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf\": container with ID starting with f25580acc029bf38685ccb929c72d056af5b0ab13f55e6bc1557b91036bb84cf not found: ID does not exist" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.696515 4802 scope.go:117] "RemoveContainer" containerID="2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9" Oct 04 05:52:13 crc kubenswrapper[4802]: E1004 05:52:13.696910 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9\": container with ID starting with 2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9 not found: ID does not exist" containerID="2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.696942 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9"} err="failed to get container status \"2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9\": rpc error: code = NotFound desc = could not find container \"2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9\": container with ID starting with 2f93f53188fbfb076062711bac96cc577f8269b0ce2632d3f709913e53a238b9 not found: ID does not exist" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.696963 4802 scope.go:117] "RemoveContainer" containerID="ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1" Oct 04 05:52:13 crc 
kubenswrapper[4802]: E1004 05:52:13.697550 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1\": container with ID starting with ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1 not found: ID does not exist" containerID="ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1" Oct 04 05:52:13 crc kubenswrapper[4802]: I1004 05:52:13.697596 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1"} err="failed to get container status \"ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1\": rpc error: code = NotFound desc = could not find container \"ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1\": container with ID starting with ced589c881547ba2e48b1be8dc5f865e82d41dc679bfa4a09d788930b713a1e1 not found: ID does not exist" Oct 04 05:52:14 crc kubenswrapper[4802]: I1004 05:52:14.370747 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" path="/var/lib/kubelet/pods/7a6e0f09-d07d-4a33-9442-4c7de36a30d3/volumes" Oct 04 05:52:15 crc kubenswrapper[4802]: I1004 05:52:15.345377 4802 scope.go:117] "RemoveContainer" containerID="fa86f81162c63aaa7829ad4a5a931b123c7a0503266476d50af44dc34703817d" Oct 04 05:52:19 crc kubenswrapper[4802]: I1004 05:52:19.041732 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-ae60-account-create-7bfx8"] Oct 04 05:52:19 crc kubenswrapper[4802]: I1004 05:52:19.050658 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-ae60-account-create-7bfx8"] Oct 04 05:52:20 crc kubenswrapper[4802]: I1004 05:52:20.375529 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9ce204-85cc-4eed-9ca0-9c6c867786ea" 
path="/var/lib/kubelet/pods/9b9ce204-85cc-4eed-9ca0-9c6c867786ea/volumes" Oct 04 05:52:23 crc kubenswrapper[4802]: I1004 05:52:23.359848 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:52:23 crc kubenswrapper[4802]: E1004 05:52:23.360564 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:52:36 crc kubenswrapper[4802]: I1004 05:52:36.360171 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:52:36 crc kubenswrapper[4802]: E1004 05:52:36.361237 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:52:43 crc kubenswrapper[4802]: I1004 05:52:43.045783 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-q7cpd"] Oct 04 05:52:43 crc kubenswrapper[4802]: I1004 05:52:43.057330 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-q7cpd"] Oct 04 05:52:44 crc kubenswrapper[4802]: I1004 05:52:44.369587 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe45cab7-328c-41b5-8b99-fdc57a6c3727" path="/var/lib/kubelet/pods/fe45cab7-328c-41b5-8b99-fdc57a6c3727/volumes" Oct 04 05:52:50 crc 
kubenswrapper[4802]: I1004 05:52:50.365443 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:52:50 crc kubenswrapper[4802]: E1004 05:52:50.366554 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:53:03 crc kubenswrapper[4802]: I1004 05:53:03.360454 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:53:03 crc kubenswrapper[4802]: E1004 05:53:03.361367 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:53:15 crc kubenswrapper[4802]: I1004 05:53:15.425779 4802 scope.go:117] "RemoveContainer" containerID="3a76aff782f46578935394bb3e691f8e1f46db6470201e7aff2f13d04c468a53" Oct 04 05:53:15 crc kubenswrapper[4802]: I1004 05:53:15.467598 4802 scope.go:117] "RemoveContainer" containerID="a159a6d85a674781afb01a384880f47ce171d858f69f1145a3722dc0b2520271" Oct 04 05:53:18 crc kubenswrapper[4802]: I1004 05:53:18.368759 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:53:18 crc kubenswrapper[4802]: E1004 05:53:18.369509 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:53:29 crc kubenswrapper[4802]: I1004 05:53:29.362558 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:53:29 crc kubenswrapper[4802]: E1004 05:53:29.364231 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:53:41 crc kubenswrapper[4802]: I1004 05:53:41.360337 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:53:41 crc kubenswrapper[4802]: E1004 05:53:41.371154 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:53:56 crc kubenswrapper[4802]: I1004 05:53:56.360019 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:53:56 crc kubenswrapper[4802]: E1004 05:53:56.360886 4802 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:54:08 crc kubenswrapper[4802]: I1004 05:54:08.390138 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:54:08 crc kubenswrapper[4802]: E1004 05:54:08.390871 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:54:20 crc kubenswrapper[4802]: I1004 05:54:20.360512 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:54:20 crc kubenswrapper[4802]: E1004 05:54:20.361539 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:54:35 crc kubenswrapper[4802]: I1004 05:54:35.360947 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:54:35 crc kubenswrapper[4802]: E1004 05:54:35.361768 4802 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:54:46 crc kubenswrapper[4802]: I1004 05:54:46.360001 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:54:46 crc kubenswrapper[4802]: E1004 05:54:46.360992 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 05:54:57 crc kubenswrapper[4802]: I1004 05:54:57.361745 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:54:58 crc kubenswrapper[4802]: I1004 05:54:58.176131 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"b9c7b065670f04eae005b3fb1e676466d6a655377b55aa24d3b7a7403f013898"} Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.069815 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crwfg"] Oct 04 05:57:03 crc kubenswrapper[4802]: E1004 05:57:03.072050 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="extract-content" Oct 04 05:57:03 crc 
kubenswrapper[4802]: I1004 05:57:03.072162 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="extract-content" Oct 04 05:57:03 crc kubenswrapper[4802]: E1004 05:57:03.072255 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerName="extract-utilities" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.072333 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerName="extract-utilities" Oct 04 05:57:03 crc kubenswrapper[4802]: E1004 05:57:03.072416 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerName="extract-content" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.072488 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerName="extract-content" Oct 04 05:57:03 crc kubenswrapper[4802]: E1004 05:57:03.072576 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="registry-server" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.072672 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="registry-server" Oct 04 05:57:03 crc kubenswrapper[4802]: E1004 05:57:03.072770 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="extract-utilities" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.072859 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="extract-utilities" Oct 04 05:57:03 crc kubenswrapper[4802]: E1004 05:57:03.072945 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerName="registry-server" Oct 04 05:57:03 crc 
kubenswrapper[4802]: I1004 05:57:03.073029 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerName="registry-server" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.073326 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a6e0f09-d07d-4a33-9442-4c7de36a30d3" containerName="registry-server" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.073427 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43a47cd-6dcf-4efe-a101-007336624a31" containerName="registry-server" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.078408 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.089331 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crwfg"] Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.159399 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-catalog-content\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.159497 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-utilities\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.159893 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5j24\" (UniqueName: 
\"kubernetes.io/projected/9417e3fe-96e5-47d5-adfa-630a7182bc53-kube-api-access-r5j24\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.261848 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-catalog-content\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.262226 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-utilities\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.262315 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5j24\" (UniqueName: \"kubernetes.io/projected/9417e3fe-96e5-47d5-adfa-630a7182bc53-kube-api-access-r5j24\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.262381 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-catalog-content\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.262529 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-utilities\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.289245 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5j24\" (UniqueName: \"kubernetes.io/projected/9417e3fe-96e5-47d5-adfa-630a7182bc53-kube-api-access-r5j24\") pod \"certified-operators-crwfg\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.406922 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:03 crc kubenswrapper[4802]: I1004 05:57:03.980440 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crwfg"] Oct 04 05:57:04 crc kubenswrapper[4802]: I1004 05:57:04.311447 4802 generic.go:334] "Generic (PLEG): container finished" podID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerID="c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130" exitCode=0 Oct 04 05:57:04 crc kubenswrapper[4802]: I1004 05:57:04.311491 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crwfg" event={"ID":"9417e3fe-96e5-47d5-adfa-630a7182bc53","Type":"ContainerDied","Data":"c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130"} Oct 04 05:57:04 crc kubenswrapper[4802]: I1004 05:57:04.311515 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crwfg" event={"ID":"9417e3fe-96e5-47d5-adfa-630a7182bc53","Type":"ContainerStarted","Data":"6e0f64086ffd40e288b8056d4bb5c4ee3f122bddec53f2875262b3fede08f1ae"} Oct 04 05:57:04 crc kubenswrapper[4802]: I1004 05:57:04.313212 4802 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Oct 04 05:57:05 crc kubenswrapper[4802]: I1004 05:57:05.324318 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crwfg" event={"ID":"9417e3fe-96e5-47d5-adfa-630a7182bc53","Type":"ContainerStarted","Data":"66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e"} Oct 04 05:57:06 crc kubenswrapper[4802]: I1004 05:57:06.334796 4802 generic.go:334] "Generic (PLEG): container finished" podID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerID="66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e" exitCode=0 Oct 04 05:57:06 crc kubenswrapper[4802]: I1004 05:57:06.334858 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crwfg" event={"ID":"9417e3fe-96e5-47d5-adfa-630a7182bc53","Type":"ContainerDied","Data":"66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e"} Oct 04 05:57:07 crc kubenswrapper[4802]: I1004 05:57:07.346555 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crwfg" event={"ID":"9417e3fe-96e5-47d5-adfa-630a7182bc53","Type":"ContainerStarted","Data":"ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c"} Oct 04 05:57:07 crc kubenswrapper[4802]: I1004 05:57:07.372477 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crwfg" podStartSLOduration=1.92156606 podStartE2EDuration="4.372449486s" podCreationTimestamp="2025-10-04 05:57:03 +0000 UTC" firstStartedPulling="2025-10-04 05:57:04.312987101 +0000 UTC m=+4266.720987726" lastFinishedPulling="2025-10-04 05:57:06.763870527 +0000 UTC m=+4269.171871152" observedRunningTime="2025-10-04 05:57:07.364794746 +0000 UTC m=+4269.772795421" watchObservedRunningTime="2025-10-04 05:57:07.372449486 +0000 UTC m=+4269.780450121" Oct 04 05:57:13 crc kubenswrapper[4802]: I1004 05:57:13.407993 4802 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:13 crc kubenswrapper[4802]: I1004 05:57:13.408398 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:13 crc kubenswrapper[4802]: I1004 05:57:13.456368 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:14 crc kubenswrapper[4802]: I1004 05:57:14.447756 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:14 crc kubenswrapper[4802]: I1004 05:57:14.496352 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crwfg"] Oct 04 05:57:16 crc kubenswrapper[4802]: I1004 05:57:16.434071 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-crwfg" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerName="registry-server" containerID="cri-o://ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c" gracePeriod=2 Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.001136 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.174432 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-utilities\") pod \"9417e3fe-96e5-47d5-adfa-630a7182bc53\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.174623 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5j24\" (UniqueName: \"kubernetes.io/projected/9417e3fe-96e5-47d5-adfa-630a7182bc53-kube-api-access-r5j24\") pod \"9417e3fe-96e5-47d5-adfa-630a7182bc53\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.174875 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-catalog-content\") pod \"9417e3fe-96e5-47d5-adfa-630a7182bc53\" (UID: \"9417e3fe-96e5-47d5-adfa-630a7182bc53\") " Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.175499 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-utilities" (OuterVolumeSpecName: "utilities") pod "9417e3fe-96e5-47d5-adfa-630a7182bc53" (UID: "9417e3fe-96e5-47d5-adfa-630a7182bc53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.181939 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9417e3fe-96e5-47d5-adfa-630a7182bc53-kube-api-access-r5j24" (OuterVolumeSpecName: "kube-api-access-r5j24") pod "9417e3fe-96e5-47d5-adfa-630a7182bc53" (UID: "9417e3fe-96e5-47d5-adfa-630a7182bc53"). InnerVolumeSpecName "kube-api-access-r5j24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.225986 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9417e3fe-96e5-47d5-adfa-630a7182bc53" (UID: "9417e3fe-96e5-47d5-adfa-630a7182bc53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.277721 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.277755 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9417e3fe-96e5-47d5-adfa-630a7182bc53-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.277765 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5j24\" (UniqueName: \"kubernetes.io/projected/9417e3fe-96e5-47d5-adfa-630a7182bc53-kube-api-access-r5j24\") on node \"crc\" DevicePath \"\"" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.459600 4802 generic.go:334] "Generic (PLEG): container finished" podID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerID="ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c" exitCode=0 Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.459652 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crwfg" event={"ID":"9417e3fe-96e5-47d5-adfa-630a7182bc53","Type":"ContainerDied","Data":"ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c"} Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.459682 4802 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-crwfg" event={"ID":"9417e3fe-96e5-47d5-adfa-630a7182bc53","Type":"ContainerDied","Data":"6e0f64086ffd40e288b8056d4bb5c4ee3f122bddec53f2875262b3fede08f1ae"} Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.459689 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crwfg" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.459698 4802 scope.go:117] "RemoveContainer" containerID="ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.496571 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crwfg"] Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.504157 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-crwfg"] Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.508661 4802 scope.go:117] "RemoveContainer" containerID="66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.879081 4802 scope.go:117] "RemoveContainer" containerID="c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.953908 4802 scope.go:117] "RemoveContainer" containerID="ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c" Oct 04 05:57:17 crc kubenswrapper[4802]: E1004 05:57:17.954332 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c\": container with ID starting with ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c not found: ID does not exist" containerID="ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 
05:57:17.954379 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c"} err="failed to get container status \"ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c\": rpc error: code = NotFound desc = could not find container \"ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c\": container with ID starting with ffd8fea418e27fa820f28b0c86e40a695cc94e6279dbc343ad1fc81fc11a9b9c not found: ID does not exist" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.954411 4802 scope.go:117] "RemoveContainer" containerID="66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e" Oct 04 05:57:17 crc kubenswrapper[4802]: E1004 05:57:17.954847 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e\": container with ID starting with 66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e not found: ID does not exist" containerID="66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.954884 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e"} err="failed to get container status \"66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e\": rpc error: code = NotFound desc = could not find container \"66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e\": container with ID starting with 66a37a55bd8e2e361aeb5ff659b93ae6ebc69f430f936a0b3b8490d3952ad05e not found: ID does not exist" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.954905 4802 scope.go:117] "RemoveContainer" containerID="c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130" Oct 04 05:57:17 crc 
kubenswrapper[4802]: E1004 05:57:17.955212 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130\": container with ID starting with c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130 not found: ID does not exist" containerID="c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130" Oct 04 05:57:17 crc kubenswrapper[4802]: I1004 05:57:17.955268 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130"} err="failed to get container status \"c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130\": rpc error: code = NotFound desc = could not find container \"c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130\": container with ID starting with c2b3c1829a42b0aba5ac0da7265799e3b8331a3e64a14b137527b50011721130 not found: ID does not exist" Oct 04 05:57:18 crc kubenswrapper[4802]: I1004 05:57:18.370816 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" path="/var/lib/kubelet/pods/9417e3fe-96e5-47d5-adfa-630a7182bc53/volumes" Oct 04 05:57:22 crc kubenswrapper[4802]: I1004 05:57:22.662302 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:57:22 crc kubenswrapper[4802]: I1004 05:57:22.662818 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Oct 04 05:57:52 crc kubenswrapper[4802]: I1004 05:57:52.662577 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:57:52 crc kubenswrapper[4802]: I1004 05:57:52.663288 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:58:22 crc kubenswrapper[4802]: I1004 05:58:22.662748 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 05:58:22 crc kubenswrapper[4802]: I1004 05:58:22.663264 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 05:58:22 crc kubenswrapper[4802]: I1004 05:58:22.663310 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 05:58:22 crc kubenswrapper[4802]: I1004 05:58:22.664130 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b9c7b065670f04eae005b3fb1e676466d6a655377b55aa24d3b7a7403f013898"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 05:58:22 crc kubenswrapper[4802]: I1004 05:58:22.664197 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://b9c7b065670f04eae005b3fb1e676466d6a655377b55aa24d3b7a7403f013898" gracePeriod=600 Oct 04 05:58:23 crc kubenswrapper[4802]: I1004 05:58:23.074339 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="b9c7b065670f04eae005b3fb1e676466d6a655377b55aa24d3b7a7403f013898" exitCode=0 Oct 04 05:58:23 crc kubenswrapper[4802]: I1004 05:58:23.074412 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"b9c7b065670f04eae005b3fb1e676466d6a655377b55aa24d3b7a7403f013898"} Oct 04 05:58:23 crc kubenswrapper[4802]: I1004 05:58:23.074737 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce"} Oct 04 05:58:23 crc kubenswrapper[4802]: I1004 05:58:23.074762 4802 scope.go:117] "RemoveContainer" containerID="dad535f02977e2c222dbcff8ab7d3a8ed90c129af2df9c1e665a0efb4f9b4509" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.613275 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lkd76"] Oct 04 05:58:37 crc kubenswrapper[4802]: E1004 
05:58:37.614321 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerName="extract-content" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.614338 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerName="extract-content" Oct 04 05:58:37 crc kubenswrapper[4802]: E1004 05:58:37.614353 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerName="extract-utilities" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.614360 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerName="extract-utilities" Oct 04 05:58:37 crc kubenswrapper[4802]: E1004 05:58:37.614374 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerName="registry-server" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.614382 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerName="registry-server" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.616253 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9417e3fe-96e5-47d5-adfa-630a7182bc53" containerName="registry-server" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.637711 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.654265 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkd76"] Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.755460 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-catalog-content\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.755666 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8r6\" (UniqueName: \"kubernetes.io/projected/a42e2eae-9210-436f-ac49-9471a96aef8d-kube-api-access-6c8r6\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.755787 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-utilities\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.858204 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-catalog-content\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.858292 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6c8r6\" (UniqueName: \"kubernetes.io/projected/a42e2eae-9210-436f-ac49-9471a96aef8d-kube-api-access-6c8r6\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.858317 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-utilities\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.858925 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-catalog-content\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.858948 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-utilities\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:37 crc kubenswrapper[4802]: I1004 05:58:37.895378 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8r6\" (UniqueName: \"kubernetes.io/projected/a42e2eae-9210-436f-ac49-9471a96aef8d-kube-api-access-6c8r6\") pod \"community-operators-lkd76\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:38 crc kubenswrapper[4802]: I1004 05:58:38.023308 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:38 crc kubenswrapper[4802]: I1004 05:58:38.639011 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkd76"] Oct 04 05:58:39 crc kubenswrapper[4802]: I1004 05:58:39.249574 4802 generic.go:334] "Generic (PLEG): container finished" podID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerID="fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d" exitCode=0 Oct 04 05:58:39 crc kubenswrapper[4802]: I1004 05:58:39.249680 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkd76" event={"ID":"a42e2eae-9210-436f-ac49-9471a96aef8d","Type":"ContainerDied","Data":"fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d"} Oct 04 05:58:39 crc kubenswrapper[4802]: I1004 05:58:39.249852 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkd76" event={"ID":"a42e2eae-9210-436f-ac49-9471a96aef8d","Type":"ContainerStarted","Data":"0b0f04f7285e767e27ec4cac387f7211bf7f12fe76a97b2a1b4e33bab5ec53dc"} Oct 04 05:58:41 crc kubenswrapper[4802]: I1004 05:58:41.276482 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkd76" event={"ID":"a42e2eae-9210-436f-ac49-9471a96aef8d","Type":"ContainerStarted","Data":"c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc"} Oct 04 05:58:42 crc kubenswrapper[4802]: I1004 05:58:42.290125 4802 generic.go:334] "Generic (PLEG): container finished" podID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerID="c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc" exitCode=0 Oct 04 05:58:42 crc kubenswrapper[4802]: I1004 05:58:42.290502 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkd76" 
event={"ID":"a42e2eae-9210-436f-ac49-9471a96aef8d","Type":"ContainerDied","Data":"c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc"} Oct 04 05:58:43 crc kubenswrapper[4802]: I1004 05:58:43.303474 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkd76" event={"ID":"a42e2eae-9210-436f-ac49-9471a96aef8d","Type":"ContainerStarted","Data":"c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428"} Oct 04 05:58:43 crc kubenswrapper[4802]: I1004 05:58:43.330390 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lkd76" podStartSLOduration=2.6836197090000002 podStartE2EDuration="6.330367187s" podCreationTimestamp="2025-10-04 05:58:37 +0000 UTC" firstStartedPulling="2025-10-04 05:58:39.251626323 +0000 UTC m=+4361.659626938" lastFinishedPulling="2025-10-04 05:58:42.898373791 +0000 UTC m=+4365.306374416" observedRunningTime="2025-10-04 05:58:43.3263019 +0000 UTC m=+4365.734302555" watchObservedRunningTime="2025-10-04 05:58:43.330367187 +0000 UTC m=+4365.738367812" Oct 04 05:58:48 crc kubenswrapper[4802]: I1004 05:58:48.024305 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:48 crc kubenswrapper[4802]: I1004 05:58:48.024890 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:48 crc kubenswrapper[4802]: I1004 05:58:48.070940 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:48 crc kubenswrapper[4802]: I1004 05:58:48.396764 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:48 crc kubenswrapper[4802]: I1004 05:58:48.445512 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-lkd76"] Oct 04 05:58:50 crc kubenswrapper[4802]: I1004 05:58:50.365725 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lkd76" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerName="registry-server" containerID="cri-o://c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428" gracePeriod=2 Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.063898 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.128628 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-utilities\") pod \"a42e2eae-9210-436f-ac49-9471a96aef8d\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.128786 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8r6\" (UniqueName: \"kubernetes.io/projected/a42e2eae-9210-436f-ac49-9471a96aef8d-kube-api-access-6c8r6\") pod \"a42e2eae-9210-436f-ac49-9471a96aef8d\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.128862 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-catalog-content\") pod \"a42e2eae-9210-436f-ac49-9471a96aef8d\" (UID: \"a42e2eae-9210-436f-ac49-9471a96aef8d\") " Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.130122 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-utilities" (OuterVolumeSpecName: "utilities") pod "a42e2eae-9210-436f-ac49-9471a96aef8d" (UID: 
"a42e2eae-9210-436f-ac49-9471a96aef8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.140879 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42e2eae-9210-436f-ac49-9471a96aef8d-kube-api-access-6c8r6" (OuterVolumeSpecName: "kube-api-access-6c8r6") pod "a42e2eae-9210-436f-ac49-9471a96aef8d" (UID: "a42e2eae-9210-436f-ac49-9471a96aef8d"). InnerVolumeSpecName "kube-api-access-6c8r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.203296 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a42e2eae-9210-436f-ac49-9471a96aef8d" (UID: "a42e2eae-9210-436f-ac49-9471a96aef8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.230190 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.230497 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a42e2eae-9210-436f-ac49-9471a96aef8d-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.230510 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c8r6\" (UniqueName: \"kubernetes.io/projected/a42e2eae-9210-436f-ac49-9471a96aef8d-kube-api-access-6c8r6\") on node \"crc\" DevicePath \"\"" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.377256 4802 generic.go:334] "Generic (PLEG): container finished" 
podID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerID="c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428" exitCode=0 Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.377320 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkd76" event={"ID":"a42e2eae-9210-436f-ac49-9471a96aef8d","Type":"ContainerDied","Data":"c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428"} Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.377813 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkd76" event={"ID":"a42e2eae-9210-436f-ac49-9471a96aef8d","Type":"ContainerDied","Data":"0b0f04f7285e767e27ec4cac387f7211bf7f12fe76a97b2a1b4e33bab5ec53dc"} Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.377841 4802 scope.go:117] "RemoveContainer" containerID="c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.377359 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkd76" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.401493 4802 scope.go:117] "RemoveContainer" containerID="c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.413020 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lkd76"] Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.421406 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lkd76"] Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.446149 4802 scope.go:117] "RemoveContainer" containerID="fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.482140 4802 scope.go:117] "RemoveContainer" containerID="c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428" Oct 04 05:58:51 crc kubenswrapper[4802]: E1004 05:58:51.482631 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428\": container with ID starting with c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428 not found: ID does not exist" containerID="c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.482752 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428"} err="failed to get container status \"c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428\": rpc error: code = NotFound desc = could not find container \"c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428\": container with ID starting with c55e5367d55fdbf8b39dd4e2255271b2b39c5d29481b19436ebcb3de34e11428 not 
found: ID does not exist" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.482861 4802 scope.go:117] "RemoveContainer" containerID="c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc" Oct 04 05:58:51 crc kubenswrapper[4802]: E1004 05:58:51.483466 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc\": container with ID starting with c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc not found: ID does not exist" containerID="c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.483520 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc"} err="failed to get container status \"c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc\": rpc error: code = NotFound desc = could not find container \"c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc\": container with ID starting with c36959fad23bff9d36bc6d9fd77ba5e12b7e9d3bf3d373e926c442fe27bd0cbc not found: ID does not exist" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.483552 4802 scope.go:117] "RemoveContainer" containerID="fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d" Oct 04 05:58:51 crc kubenswrapper[4802]: E1004 05:58:51.484041 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d\": container with ID starting with fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d not found: ID does not exist" containerID="fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d" Oct 04 05:58:51 crc kubenswrapper[4802]: I1004 05:58:51.484075 4802 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d"} err="failed to get container status \"fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d\": rpc error: code = NotFound desc = could not find container \"fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d\": container with ID starting with fe89cbd66637ae64bf8e3f6c33bf254dc6bbd8d50af2d6cd2ca4deb3a8cd359d not found: ID does not exist" Oct 04 05:58:52 crc kubenswrapper[4802]: I1004 05:58:52.373498 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" path="/var/lib/kubelet/pods/a42e2eae-9210-436f-ac49-9471a96aef8d/volumes" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.154323 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2"] Oct 04 06:00:00 crc kubenswrapper[4802]: E1004 06:00:00.155448 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerName="extract-utilities" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.155464 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerName="extract-utilities" Oct 04 06:00:00 crc kubenswrapper[4802]: E1004 06:00:00.155479 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerName="registry-server" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.155487 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerName="registry-server" Oct 04 06:00:00 crc kubenswrapper[4802]: E1004 06:00:00.155525 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerName="extract-content" Oct 04 06:00:00 crc 
kubenswrapper[4802]: I1004 06:00:00.155534 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerName="extract-content" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.155772 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42e2eae-9210-436f-ac49-9471a96aef8d" containerName="registry-server" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.156544 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.158866 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.166162 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.167277 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2"] Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.203919 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgpc4\" (UniqueName: \"kubernetes.io/projected/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-kube-api-access-qgpc4\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.204080 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-secret-volume\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.204177 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-config-volume\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.306488 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgpc4\" (UniqueName: \"kubernetes.io/projected/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-kube-api-access-qgpc4\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.306559 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-secret-volume\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.306594 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-config-volume\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.308241 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-config-volume\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.315814 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-secret-volume\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.332001 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgpc4\" (UniqueName: \"kubernetes.io/projected/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-kube-api-access-qgpc4\") pod \"collect-profiles-29325960-gk4t2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:00 crc kubenswrapper[4802]: I1004 06:00:00.487081 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:01 crc kubenswrapper[4802]: I1004 06:00:01.331825 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2"] Oct 04 06:00:02 crc kubenswrapper[4802]: I1004 06:00:02.036781 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" event={"ID":"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2","Type":"ContainerStarted","Data":"9b1f36fc1bc59d1cc9ee2dad0cfe1c325596a814eebbf3fd1759959c777b0cc9"} Oct 04 06:00:02 crc kubenswrapper[4802]: I1004 06:00:02.037321 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" event={"ID":"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2","Type":"ContainerStarted","Data":"d43492c9e7892811803db75f797bc3a68be9fbabc655b2d79c175a68385df6e0"} Oct 04 06:00:02 crc kubenswrapper[4802]: I1004 06:00:02.064624 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" podStartSLOduration=2.064604536 podStartE2EDuration="2.064604536s" podCreationTimestamp="2025-10-04 06:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 06:00:02.056421852 +0000 UTC m=+4444.464422487" watchObservedRunningTime="2025-10-04 06:00:02.064604536 +0000 UTC m=+4444.472605161" Oct 04 06:00:03 crc kubenswrapper[4802]: I1004 06:00:03.048127 4802 generic.go:334] "Generic (PLEG): container finished" podID="9b452608-08d6-4d4c-8f23-8ba5f74ce3c2" containerID="9b1f36fc1bc59d1cc9ee2dad0cfe1c325596a814eebbf3fd1759959c777b0cc9" exitCode=0 Oct 04 06:00:03 crc kubenswrapper[4802]: I1004 06:00:03.048189 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" event={"ID":"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2","Type":"ContainerDied","Data":"9b1f36fc1bc59d1cc9ee2dad0cfe1c325596a814eebbf3fd1759959c777b0cc9"} Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.492892 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.604114 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-secret-volume\") pod \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.604230 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-config-volume\") pod \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.604338 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgpc4\" (UniqueName: \"kubernetes.io/projected/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-kube-api-access-qgpc4\") pod \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\" (UID: \"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2\") " Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.604814 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b452608-08d6-4d4c-8f23-8ba5f74ce3c2" (UID: "9b452608-08d6-4d4c-8f23-8ba5f74ce3c2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.605278 4802 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-config-volume\") on node \"crc\" DevicePath \"\"" Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.611801 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b452608-08d6-4d4c-8f23-8ba5f74ce3c2" (UID: "9b452608-08d6-4d4c-8f23-8ba5f74ce3c2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.614922 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-kube-api-access-qgpc4" (OuterVolumeSpecName: "kube-api-access-qgpc4") pod "9b452608-08d6-4d4c-8f23-8ba5f74ce3c2" (UID: "9b452608-08d6-4d4c-8f23-8ba5f74ce3c2"). InnerVolumeSpecName "kube-api-access-qgpc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.707660 4802 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 04 06:00:04 crc kubenswrapper[4802]: I1004 06:00:04.708065 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgpc4\" (UniqueName: \"kubernetes.io/projected/9b452608-08d6-4d4c-8f23-8ba5f74ce3c2-kube-api-access-qgpc4\") on node \"crc\" DevicePath \"\"" Oct 04 06:00:05 crc kubenswrapper[4802]: I1004 06:00:05.067045 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" event={"ID":"9b452608-08d6-4d4c-8f23-8ba5f74ce3c2","Type":"ContainerDied","Data":"d43492c9e7892811803db75f797bc3a68be9fbabc655b2d79c175a68385df6e0"} Oct 04 06:00:05 crc kubenswrapper[4802]: I1004 06:00:05.067092 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43492c9e7892811803db75f797bc3a68be9fbabc655b2d79c175a68385df6e0" Oct 04 06:00:05 crc kubenswrapper[4802]: I1004 06:00:05.067106 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325960-gk4t2" Oct 04 06:00:05 crc kubenswrapper[4802]: I1004 06:00:05.578229 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx"] Oct 04 06:00:05 crc kubenswrapper[4802]: I1004 06:00:05.588785 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325915-twnvx"] Oct 04 06:00:06 crc kubenswrapper[4802]: I1004 06:00:06.371718 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="727b1862-5f93-460c-be9c-6bdd40d2a95c" path="/var/lib/kubelet/pods/727b1862-5f93-460c-be9c-6bdd40d2a95c/volumes" Oct 04 06:00:15 crc kubenswrapper[4802]: I1004 06:00:15.697138 4802 scope.go:117] "RemoveContainer" containerID="c73c49a597409c6e1db4b4726e9f34e15b5d9f4d5599d4847ee17d23d06d107e" Oct 04 06:00:52 crc kubenswrapper[4802]: I1004 06:00:52.662719 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:00:52 crc kubenswrapper[4802]: I1004 06:00:52.663237 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.157594 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29325961-9qgmt"] Oct 04 06:01:00 crc kubenswrapper[4802]: E1004 06:01:00.158570 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b452608-08d6-4d4c-8f23-8ba5f74ce3c2" 
containerName="collect-profiles" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.158588 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b452608-08d6-4d4c-8f23-8ba5f74ce3c2" containerName="collect-profiles" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.158855 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b452608-08d6-4d4c-8f23-8ba5f74ce3c2" containerName="collect-profiles" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.159603 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.178326 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29325961-9qgmt"] Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.372243 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-fernet-keys\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.372511 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdsw7\" (UniqueName: \"kubernetes.io/projected/b73fd896-44ec-4db0-b095-86311908fc72-kube-api-access-vdsw7\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.372613 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-config-data\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" 
Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.372769 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-combined-ca-bundle\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.476092 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdsw7\" (UniqueName: \"kubernetes.io/projected/b73fd896-44ec-4db0-b095-86311908fc72-kube-api-access-vdsw7\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.476272 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-config-data\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.476525 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-combined-ca-bundle\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.476664 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-fernet-keys\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc 
kubenswrapper[4802]: I1004 06:01:00.483269 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-fernet-keys\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.487187 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-combined-ca-bundle\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.487322 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-config-data\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.491814 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdsw7\" (UniqueName: \"kubernetes.io/projected/b73fd896-44ec-4db0-b095-86311908fc72-kube-api-access-vdsw7\") pod \"keystone-cron-29325961-9qgmt\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.509188 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:00 crc kubenswrapper[4802]: I1004 06:01:00.978900 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29325961-9qgmt"] Oct 04 06:01:01 crc kubenswrapper[4802]: I1004 06:01:01.569822 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29325961-9qgmt" event={"ID":"b73fd896-44ec-4db0-b095-86311908fc72","Type":"ContainerStarted","Data":"e94e861c091a29739e8cb017964d0157219e1a8198775125ec7c6e98df466880"} Oct 04 06:01:01 crc kubenswrapper[4802]: I1004 06:01:01.571392 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29325961-9qgmt" event={"ID":"b73fd896-44ec-4db0-b095-86311908fc72","Type":"ContainerStarted","Data":"859dd4b7ab0e1fa2904cd40153d7431170874b3fed6c035941294777e158181d"} Oct 04 06:01:01 crc kubenswrapper[4802]: I1004 06:01:01.603660 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29325961-9qgmt" podStartSLOduration=1.603620912 podStartE2EDuration="1.603620912s" podCreationTimestamp="2025-10-04 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-04 06:01:01.592041911 +0000 UTC m=+4504.000042536" watchObservedRunningTime="2025-10-04 06:01:01.603620912 +0000 UTC m=+4504.011621547" Oct 04 06:01:04 crc kubenswrapper[4802]: I1004 06:01:04.601756 4802 generic.go:334] "Generic (PLEG): container finished" podID="b73fd896-44ec-4db0-b095-86311908fc72" containerID="e94e861c091a29739e8cb017964d0157219e1a8198775125ec7c6e98df466880" exitCode=0 Oct 04 06:01:04 crc kubenswrapper[4802]: I1004 06:01:04.601879 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29325961-9qgmt" 
event={"ID":"b73fd896-44ec-4db0-b095-86311908fc72","Type":"ContainerDied","Data":"e94e861c091a29739e8cb017964d0157219e1a8198775125ec7c6e98df466880"} Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.047794 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.195032 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdsw7\" (UniqueName: \"kubernetes.io/projected/b73fd896-44ec-4db0-b095-86311908fc72-kube-api-access-vdsw7\") pod \"b73fd896-44ec-4db0-b095-86311908fc72\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.195185 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-config-data\") pod \"b73fd896-44ec-4db0-b095-86311908fc72\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.195253 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-fernet-keys\") pod \"b73fd896-44ec-4db0-b095-86311908fc72\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.195335 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-combined-ca-bundle\") pod \"b73fd896-44ec-4db0-b095-86311908fc72\" (UID: \"b73fd896-44ec-4db0-b095-86311908fc72\") " Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.201113 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73fd896-44ec-4db0-b095-86311908fc72-kube-api-access-vdsw7" 
(OuterVolumeSpecName: "kube-api-access-vdsw7") pod "b73fd896-44ec-4db0-b095-86311908fc72" (UID: "b73fd896-44ec-4db0-b095-86311908fc72"). InnerVolumeSpecName "kube-api-access-vdsw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.201736 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b73fd896-44ec-4db0-b095-86311908fc72" (UID: "b73fd896-44ec-4db0-b095-86311908fc72"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.231214 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b73fd896-44ec-4db0-b095-86311908fc72" (UID: "b73fd896-44ec-4db0-b095-86311908fc72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.250048 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-config-data" (OuterVolumeSpecName: "config-data") pod "b73fd896-44ec-4db0-b095-86311908fc72" (UID: "b73fd896-44ec-4db0-b095-86311908fc72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.297658 4802 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.297698 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdsw7\" (UniqueName: \"kubernetes.io/projected/b73fd896-44ec-4db0-b095-86311908fc72-kube-api-access-vdsw7\") on node \"crc\" DevicePath \"\"" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.297711 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.297723 4802 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b73fd896-44ec-4db0-b095-86311908fc72-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.619790 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29325961-9qgmt" event={"ID":"b73fd896-44ec-4db0-b095-86311908fc72","Type":"ContainerDied","Data":"859dd4b7ab0e1fa2904cd40153d7431170874b3fed6c035941294777e158181d"} Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.619834 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="859dd4b7ab0e1fa2904cd40153d7431170874b3fed6c035941294777e158181d" Oct 04 06:01:06 crc kubenswrapper[4802]: I1004 06:01:06.619908 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29325961-9qgmt" Oct 04 06:01:22 crc kubenswrapper[4802]: I1004 06:01:22.662788 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:01:22 crc kubenswrapper[4802]: I1004 06:01:22.663400 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:01:52 crc kubenswrapper[4802]: I1004 06:01:52.662782 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:01:52 crc kubenswrapper[4802]: I1004 06:01:52.663355 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:01:52 crc kubenswrapper[4802]: I1004 06:01:52.663400 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 06:01:52 crc kubenswrapper[4802]: I1004 06:01:52.664215 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 06:01:52 crc kubenswrapper[4802]: I1004 06:01:52.664285 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" gracePeriod=600 Oct 04 06:01:52 crc kubenswrapper[4802]: E1004 06:01:52.834417 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:01:53 crc kubenswrapper[4802]: I1004 06:01:53.054210 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" exitCode=0 Oct 04 06:01:53 crc kubenswrapper[4802]: I1004 06:01:53.054250 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce"} Oct 04 06:01:53 crc kubenswrapper[4802]: I1004 06:01:53.054279 4802 scope.go:117] "RemoveContainer" containerID="b9c7b065670f04eae005b3fb1e676466d6a655377b55aa24d3b7a7403f013898" Oct 04 06:01:53 crc kubenswrapper[4802]: I1004 06:01:53.054809 4802 
scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:01:53 crc kubenswrapper[4802]: E1004 06:01:53.055024 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:02:07 crc kubenswrapper[4802]: I1004 06:02:07.359945 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:02:07 crc kubenswrapper[4802]: E1004 06:02:07.360753 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.386774 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fmwvs"] Oct 04 06:02:08 crc kubenswrapper[4802]: E1004 06:02:08.387475 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73fd896-44ec-4db0-b095-86311908fc72" containerName="keystone-cron" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.387489 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73fd896-44ec-4db0-b095-86311908fc72" containerName="keystone-cron" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.387686 4802 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b73fd896-44ec-4db0-b095-86311908fc72" containerName="keystone-cron" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.389029 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.400060 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmwvs"] Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.415827 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-catalog-content\") pod \"redhat-operators-fmwvs\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.415880 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txfcd\" (UniqueName: \"kubernetes.io/projected/3269473b-e69a-4b11-95e8-48e512f724bb-kube-api-access-txfcd\") pod \"redhat-operators-fmwvs\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.416573 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-utilities\") pod \"redhat-operators-fmwvs\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.519088 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-catalog-content\") pod \"redhat-operators-fmwvs\" (UID: 
\"3269473b-e69a-4b11-95e8-48e512f724bb\") " pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.519162 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txfcd\" (UniqueName: \"kubernetes.io/projected/3269473b-e69a-4b11-95e8-48e512f724bb-kube-api-access-txfcd\") pod \"redhat-operators-fmwvs\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.519333 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-utilities\") pod \"redhat-operators-fmwvs\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.519863 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-catalog-content\") pod \"redhat-operators-fmwvs\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.519900 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-utilities\") pod \"redhat-operators-fmwvs\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.544797 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txfcd\" (UniqueName: \"kubernetes.io/projected/3269473b-e69a-4b11-95e8-48e512f724bb-kube-api-access-txfcd\") pod \"redhat-operators-fmwvs\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " 
pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:08 crc kubenswrapper[4802]: I1004 06:02:08.725110 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:09 crc kubenswrapper[4802]: I1004 06:02:09.262068 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmwvs"] Oct 04 06:02:10 crc kubenswrapper[4802]: I1004 06:02:10.214866 4802 generic.go:334] "Generic (PLEG): container finished" podID="3269473b-e69a-4b11-95e8-48e512f724bb" containerID="76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82" exitCode=0 Oct 04 06:02:10 crc kubenswrapper[4802]: I1004 06:02:10.214978 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmwvs" event={"ID":"3269473b-e69a-4b11-95e8-48e512f724bb","Type":"ContainerDied","Data":"76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82"} Oct 04 06:02:10 crc kubenswrapper[4802]: I1004 06:02:10.215417 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmwvs" event={"ID":"3269473b-e69a-4b11-95e8-48e512f724bb","Type":"ContainerStarted","Data":"34f4640be95027f52046fea7003acb8164f6c433e006dceda52d8bcc36fae706"} Oct 04 06:02:10 crc kubenswrapper[4802]: I1004 06:02:10.217302 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 06:02:11 crc kubenswrapper[4802]: I1004 06:02:11.226987 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmwvs" event={"ID":"3269473b-e69a-4b11-95e8-48e512f724bb","Type":"ContainerStarted","Data":"7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7"} Oct 04 06:02:12 crc kubenswrapper[4802]: I1004 06:02:12.237387 4802 generic.go:334] "Generic (PLEG): container finished" podID="3269473b-e69a-4b11-95e8-48e512f724bb" 
containerID="7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7" exitCode=0 Oct 04 06:02:12 crc kubenswrapper[4802]: I1004 06:02:12.237433 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmwvs" event={"ID":"3269473b-e69a-4b11-95e8-48e512f724bb","Type":"ContainerDied","Data":"7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7"} Oct 04 06:02:13 crc kubenswrapper[4802]: I1004 06:02:13.248223 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmwvs" event={"ID":"3269473b-e69a-4b11-95e8-48e512f724bb","Type":"ContainerStarted","Data":"23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1"} Oct 04 06:02:13 crc kubenswrapper[4802]: I1004 06:02:13.276584 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fmwvs" podStartSLOduration=2.635448527 podStartE2EDuration="5.27656494s" podCreationTimestamp="2025-10-04 06:02:08 +0000 UTC" firstStartedPulling="2025-10-04 06:02:10.217026011 +0000 UTC m=+4572.625026636" lastFinishedPulling="2025-10-04 06:02:12.858142424 +0000 UTC m=+4575.266143049" observedRunningTime="2025-10-04 06:02:13.270810266 +0000 UTC m=+4575.678810901" watchObservedRunningTime="2025-10-04 06:02:13.27656494 +0000 UTC m=+4575.684565575" Oct 04 06:02:18 crc kubenswrapper[4802]: I1004 06:02:18.367848 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:02:18 crc kubenswrapper[4802]: E1004 06:02:18.368717 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" 
podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:02:18 crc kubenswrapper[4802]: I1004 06:02:18.725390 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:18 crc kubenswrapper[4802]: I1004 06:02:18.725715 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:18 crc kubenswrapper[4802]: I1004 06:02:18.781923 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:19 crc kubenswrapper[4802]: I1004 06:02:19.346281 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:19 crc kubenswrapper[4802]: I1004 06:02:19.418278 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fmwvs"] Oct 04 06:02:21 crc kubenswrapper[4802]: I1004 06:02:21.326329 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fmwvs" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" containerName="registry-server" containerID="cri-o://23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1" gracePeriod=2 Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.024524 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.201463 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txfcd\" (UniqueName: \"kubernetes.io/projected/3269473b-e69a-4b11-95e8-48e512f724bb-kube-api-access-txfcd\") pod \"3269473b-e69a-4b11-95e8-48e512f724bb\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.201521 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-catalog-content\") pod \"3269473b-e69a-4b11-95e8-48e512f724bb\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.201610 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-utilities\") pod \"3269473b-e69a-4b11-95e8-48e512f724bb\" (UID: \"3269473b-e69a-4b11-95e8-48e512f724bb\") " Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.202565 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-utilities" (OuterVolumeSpecName: "utilities") pod "3269473b-e69a-4b11-95e8-48e512f724bb" (UID: "3269473b-e69a-4b11-95e8-48e512f724bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.208091 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3269473b-e69a-4b11-95e8-48e512f724bb-kube-api-access-txfcd" (OuterVolumeSpecName: "kube-api-access-txfcd") pod "3269473b-e69a-4b11-95e8-48e512f724bb" (UID: "3269473b-e69a-4b11-95e8-48e512f724bb"). InnerVolumeSpecName "kube-api-access-txfcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.304058 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txfcd\" (UniqueName: \"kubernetes.io/projected/3269473b-e69a-4b11-95e8-48e512f724bb-kube-api-access-txfcd\") on node \"crc\" DevicePath \"\"" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.304090 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.305574 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3269473b-e69a-4b11-95e8-48e512f724bb" (UID: "3269473b-e69a-4b11-95e8-48e512f724bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.334388 4802 generic.go:334] "Generic (PLEG): container finished" podID="3269473b-e69a-4b11-95e8-48e512f724bb" containerID="23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1" exitCode=0 Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.334431 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fmwvs" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.334433 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmwvs" event={"ID":"3269473b-e69a-4b11-95e8-48e512f724bb","Type":"ContainerDied","Data":"23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1"} Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.334543 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmwvs" event={"ID":"3269473b-e69a-4b11-95e8-48e512f724bb","Type":"ContainerDied","Data":"34f4640be95027f52046fea7003acb8164f6c433e006dceda52d8bcc36fae706"} Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.334564 4802 scope.go:117] "RemoveContainer" containerID="23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.356112 4802 scope.go:117] "RemoveContainer" containerID="7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.370287 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fmwvs"] Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.375170 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fmwvs"] Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.382092 4802 scope.go:117] "RemoveContainer" containerID="76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.405854 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3269473b-e69a-4b11-95e8-48e512f724bb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.427203 4802 scope.go:117] "RemoveContainer" 
containerID="23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1" Oct 04 06:02:22 crc kubenswrapper[4802]: E1004 06:02:22.427726 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1\": container with ID starting with 23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1 not found: ID does not exist" containerID="23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.427764 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1"} err="failed to get container status \"23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1\": rpc error: code = NotFound desc = could not find container \"23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1\": container with ID starting with 23e75031628e9c78848f6b8e859dab4428b632e0125394ecde8ad2e3d59b8ff1 not found: ID does not exist" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.427791 4802 scope.go:117] "RemoveContainer" containerID="7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7" Oct 04 06:02:22 crc kubenswrapper[4802]: E1004 06:02:22.428111 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7\": container with ID starting with 7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7 not found: ID does not exist" containerID="7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.428147 4802 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7"} err="failed to get container status \"7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7\": rpc error: code = NotFound desc = could not find container \"7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7\": container with ID starting with 7673432261c68a37616c76f7ded8ef2385f16ab0250ee02ddcdea33381ea6ae7 not found: ID does not exist" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.428167 4802 scope.go:117] "RemoveContainer" containerID="76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82" Oct 04 06:02:22 crc kubenswrapper[4802]: E1004 06:02:22.428409 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82\": container with ID starting with 76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82 not found: ID does not exist" containerID="76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82" Oct 04 06:02:22 crc kubenswrapper[4802]: I1004 06:02:22.428429 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82"} err="failed to get container status \"76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82\": rpc error: code = NotFound desc = could not find container \"76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82\": container with ID starting with 76e04e9a3e4870f2ad4a4c3e72767b37e63ff34e64f8709e05dd9958d5af7a82 not found: ID does not exist" Oct 04 06:02:24 crc kubenswrapper[4802]: I1004 06:02:24.370757 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" path="/var/lib/kubelet/pods/3269473b-e69a-4b11-95e8-48e512f724bb/volumes" Oct 04 06:02:30 crc kubenswrapper[4802]: I1004 
06:02:30.360527 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:02:30 crc kubenswrapper[4802]: E1004 06:02:30.361160 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.609792 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sld7j"] Oct 04 06:02:44 crc kubenswrapper[4802]: E1004 06:02:44.610884 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" containerName="extract-content" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.610902 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" containerName="extract-content" Oct 04 06:02:44 crc kubenswrapper[4802]: E1004 06:02:44.610919 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" containerName="extract-utilities" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.610929 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" containerName="extract-utilities" Oct 04 06:02:44 crc kubenswrapper[4802]: E1004 06:02:44.610958 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" containerName="registry-server" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.610965 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" containerName="registry-server" Oct 04 
06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.611183 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="3269473b-e69a-4b11-95e8-48e512f724bb" containerName="registry-server" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.612849 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.624953 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sld7j"] Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.778651 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-utilities\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.778733 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-catalog-content\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.778849 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6wmr\" (UniqueName: \"kubernetes.io/projected/98f41b23-4195-471e-bb8b-dfdabf36aa4f-kube-api-access-q6wmr\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.880920 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-utilities\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.880983 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-catalog-content\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.881069 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6wmr\" (UniqueName: \"kubernetes.io/projected/98f41b23-4195-471e-bb8b-dfdabf36aa4f-kube-api-access-q6wmr\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.881532 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-utilities\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.881578 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-catalog-content\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.918564 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6wmr\" (UniqueName: 
\"kubernetes.io/projected/98f41b23-4195-471e-bb8b-dfdabf36aa4f-kube-api-access-q6wmr\") pod \"redhat-marketplace-sld7j\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:44 crc kubenswrapper[4802]: I1004 06:02:44.936520 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:45 crc kubenswrapper[4802]: I1004 06:02:45.359904 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:02:45 crc kubenswrapper[4802]: E1004 06:02:45.360439 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:02:45 crc kubenswrapper[4802]: I1004 06:02:45.423152 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sld7j"] Oct 04 06:02:45 crc kubenswrapper[4802]: W1004 06:02:45.768768 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice/crio-1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497 WatchSource:0}: Error finding container 1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497: Status 404 returned error can't find the container with id 1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497 Oct 04 06:02:46 crc kubenswrapper[4802]: I1004 06:02:46.554570 4802 generic.go:334] "Generic (PLEG): container finished" podID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" 
containerID="ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f" exitCode=0 Oct 04 06:02:46 crc kubenswrapper[4802]: I1004 06:02:46.554915 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sld7j" event={"ID":"98f41b23-4195-471e-bb8b-dfdabf36aa4f","Type":"ContainerDied","Data":"ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f"} Oct 04 06:02:46 crc kubenswrapper[4802]: I1004 06:02:46.554944 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sld7j" event={"ID":"98f41b23-4195-471e-bb8b-dfdabf36aa4f","Type":"ContainerStarted","Data":"1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497"} Oct 04 06:02:48 crc kubenswrapper[4802]: I1004 06:02:48.584805 4802 generic.go:334] "Generic (PLEG): container finished" podID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerID="949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d" exitCode=0 Oct 04 06:02:48 crc kubenswrapper[4802]: I1004 06:02:48.584971 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sld7j" event={"ID":"98f41b23-4195-471e-bb8b-dfdabf36aa4f","Type":"ContainerDied","Data":"949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d"} Oct 04 06:02:49 crc kubenswrapper[4802]: I1004 06:02:49.596848 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sld7j" event={"ID":"98f41b23-4195-471e-bb8b-dfdabf36aa4f","Type":"ContainerStarted","Data":"08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b"} Oct 04 06:02:49 crc kubenswrapper[4802]: I1004 06:02:49.624732 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sld7j" podStartSLOduration=2.982105586 podStartE2EDuration="5.624714981s" podCreationTimestamp="2025-10-04 06:02:44 +0000 UTC" firstStartedPulling="2025-10-04 06:02:46.556653648 +0000 
UTC m=+4608.964654283" lastFinishedPulling="2025-10-04 06:02:49.199263043 +0000 UTC m=+4611.607263678" observedRunningTime="2025-10-04 06:02:49.617994129 +0000 UTC m=+4612.025994784" watchObservedRunningTime="2025-10-04 06:02:49.624714981 +0000 UTC m=+4612.032715606" Oct 04 06:02:54 crc kubenswrapper[4802]: I1004 06:02:54.937339 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:54 crc kubenswrapper[4802]: I1004 06:02:54.939422 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:55 crc kubenswrapper[4802]: I1004 06:02:55.036582 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:55 crc kubenswrapper[4802]: I1004 06:02:55.728441 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:55 crc kubenswrapper[4802]: I1004 06:02:55.771760 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sld7j"] Oct 04 06:02:57 crc kubenswrapper[4802]: I1004 06:02:57.673258 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sld7j" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerName="registry-server" containerID="cri-o://08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b" gracePeriod=2 Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.172409 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.301882 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-utilities\") pod \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.302017 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-catalog-content\") pod \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.302197 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6wmr\" (UniqueName: \"kubernetes.io/projected/98f41b23-4195-471e-bb8b-dfdabf36aa4f-kube-api-access-q6wmr\") pod \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\" (UID: \"98f41b23-4195-471e-bb8b-dfdabf36aa4f\") " Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.302943 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-utilities" (OuterVolumeSpecName: "utilities") pod "98f41b23-4195-471e-bb8b-dfdabf36aa4f" (UID: "98f41b23-4195-471e-bb8b-dfdabf36aa4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.307439 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f41b23-4195-471e-bb8b-dfdabf36aa4f-kube-api-access-q6wmr" (OuterVolumeSpecName: "kube-api-access-q6wmr") pod "98f41b23-4195-471e-bb8b-dfdabf36aa4f" (UID: "98f41b23-4195-471e-bb8b-dfdabf36aa4f"). InnerVolumeSpecName "kube-api-access-q6wmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.316734 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98f41b23-4195-471e-bb8b-dfdabf36aa4f" (UID: "98f41b23-4195-471e-bb8b-dfdabf36aa4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.404256 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.404622 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6wmr\" (UniqueName: \"kubernetes.io/projected/98f41b23-4195-471e-bb8b-dfdabf36aa4f-kube-api-access-q6wmr\") on node \"crc\" DevicePath \"\"" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.404653 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f41b23-4195-471e-bb8b-dfdabf36aa4f-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 06:02:58 crc kubenswrapper[4802]: E1004 06:02:58.596612 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice\": RecentStats: unable to find data in memory cache]" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.685487 4802 generic.go:334] "Generic (PLEG): container finished" podID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerID="08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b" exitCode=0 Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.685533 4802 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sld7j" event={"ID":"98f41b23-4195-471e-bb8b-dfdabf36aa4f","Type":"ContainerDied","Data":"08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b"} Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.685562 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sld7j" event={"ID":"98f41b23-4195-471e-bb8b-dfdabf36aa4f","Type":"ContainerDied","Data":"1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497"} Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.685592 4802 scope.go:117] "RemoveContainer" containerID="08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.685744 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sld7j" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.714038 4802 scope.go:117] "RemoveContainer" containerID="949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.715794 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sld7j"] Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.739757 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sld7j"] Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.811840 4802 scope.go:117] "RemoveContainer" containerID="ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.916279 4802 scope.go:117] "RemoveContainer" containerID="08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b" Oct 04 06:02:58 crc kubenswrapper[4802]: E1004 06:02:58.917886 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b\": container with ID starting with 08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b not found: ID does not exist" containerID="08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.917941 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b"} err="failed to get container status \"08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b\": rpc error: code = NotFound desc = could not find container \"08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b\": container with ID starting with 08c533bc98a0d3b0ba08d791cd25852185a4bd7f299f627a35e7f93df5481a6b not found: ID does not exist" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.917968 4802 scope.go:117] "RemoveContainer" containerID="949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d" Oct 04 06:02:58 crc kubenswrapper[4802]: E1004 06:02:58.918401 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d\": container with ID starting with 949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d not found: ID does not exist" containerID="949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.918429 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d"} err="failed to get container status \"949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d\": rpc error: code = NotFound desc = could not find container \"949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d\": container with ID 
starting with 949bb01b9936ce9366b79fce22448874aa94accc3ac9c6dc0ebea9afab6ff83d not found: ID does not exist" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.918446 4802 scope.go:117] "RemoveContainer" containerID="ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f" Oct 04 06:02:58 crc kubenswrapper[4802]: E1004 06:02:58.918918 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f\": container with ID starting with ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f not found: ID does not exist" containerID="ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f" Oct 04 06:02:58 crc kubenswrapper[4802]: I1004 06:02:58.918947 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f"} err="failed to get container status \"ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f\": rpc error: code = NotFound desc = could not find container \"ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f\": container with ID starting with ffd6a67acdd1d7b53e117a85728344a25f3add29374a13a1eef7c65a2f41c03f not found: ID does not exist" Oct 04 06:02:59 crc kubenswrapper[4802]: I1004 06:02:59.360044 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:02:59 crc kubenswrapper[4802]: E1004 06:02:59.361167 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" 
podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:03:00 crc kubenswrapper[4802]: I1004 06:03:00.370994 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" path="/var/lib/kubelet/pods/98f41b23-4195-471e-bb8b-dfdabf36aa4f/volumes" Oct 04 06:03:08 crc kubenswrapper[4802]: E1004 06:03:08.897179 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice/crio-1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497\": RecentStats: unable to find data in memory cache]" Oct 04 06:03:10 crc kubenswrapper[4802]: I1004 06:03:10.359785 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:03:10 crc kubenswrapper[4802]: E1004 06:03:10.360287 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:03:19 crc kubenswrapper[4802]: E1004 06:03:19.158492 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice/crio-1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice\": RecentStats: unable to find data in memory cache]" Oct 04 06:03:24 crc kubenswrapper[4802]: I1004 06:03:24.359970 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:03:24 crc kubenswrapper[4802]: E1004 06:03:24.360925 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:03:29 crc kubenswrapper[4802]: E1004 06:03:29.404358 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice/crio-1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497\": RecentStats: unable to find data in memory cache]" Oct 04 06:03:35 crc kubenswrapper[4802]: I1004 06:03:35.360906 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:03:35 crc kubenswrapper[4802]: E1004 06:03:35.361815 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:03:39 crc kubenswrapper[4802]: E1004 06:03:39.679563 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice/crio-1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497\": RecentStats: unable to find data in memory cache]" Oct 04 06:03:48 crc kubenswrapper[4802]: I1004 06:03:48.368213 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:03:48 crc kubenswrapper[4802]: E1004 06:03:48.369030 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:03:49 crc kubenswrapper[4802]: E1004 06:03:49.939077 4802 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f41b23_4195_471e_bb8b_dfdabf36aa4f.slice/crio-1c22c9e5c5233b10c14d905f1284ac5e54a7bcb27e969f1b66fa85b3f8492497\": RecentStats: unable to find data in memory cache]" Oct 04 06:03:58 crc kubenswrapper[4802]: E1004 06:03:58.406313 4802 
fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/383d191e07ce1c151986b2f1a4ad3b122c598446af710cf5a52838b9865afec8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/383d191e07ce1c151986b2f1a4ad3b122c598446af710cf5a52838b9865afec8/diff: no such file or directory, extraDiskErr: Oct 04 06:04:00 crc kubenswrapper[4802]: I1004 06:04:00.360146 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:04:00 crc kubenswrapper[4802]: E1004 06:04:00.360883 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:04:11 crc kubenswrapper[4802]: I1004 06:04:11.360311 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:04:11 crc kubenswrapper[4802]: E1004 06:04:11.361243 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:04:22 crc kubenswrapper[4802]: I1004 06:04:22.360625 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:04:22 crc kubenswrapper[4802]: E1004 06:04:22.361414 4802 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:04:34 crc kubenswrapper[4802]: I1004 06:04:34.360910 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:04:34 crc kubenswrapper[4802]: E1004 06:04:34.361780 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:04:45 crc kubenswrapper[4802]: I1004 06:04:45.359304 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:04:45 crc kubenswrapper[4802]: E1004 06:04:45.360041 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:04:56 crc kubenswrapper[4802]: I1004 06:04:56.359875 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:04:56 crc kubenswrapper[4802]: E1004 06:04:56.360542 4802 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:05:11 crc kubenswrapper[4802]: I1004 06:05:11.360127 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:05:11 crc kubenswrapper[4802]: E1004 06:05:11.361143 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:05:25 crc kubenswrapper[4802]: I1004 06:05:25.360407 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:05:25 crc kubenswrapper[4802]: E1004 06:05:25.361434 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:05:36 crc kubenswrapper[4802]: I1004 06:05:36.359590 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:05:36 crc kubenswrapper[4802]: E1004 06:05:36.360277 4802 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:05:51 crc kubenswrapper[4802]: I1004 06:05:51.360555 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:05:51 crc kubenswrapper[4802]: E1004 06:05:51.361746 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:06:05 crc kubenswrapper[4802]: I1004 06:06:05.360728 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:06:05 crc kubenswrapper[4802]: E1004 06:06:05.361625 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:06:20 crc kubenswrapper[4802]: I1004 06:06:20.360135 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:06:20 crc kubenswrapper[4802]: E1004 
06:06:20.360946 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:06:34 crc kubenswrapper[4802]: I1004 06:06:34.362076 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:06:34 crc kubenswrapper[4802]: E1004 06:06:34.363033 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:06:49 crc kubenswrapper[4802]: I1004 06:06:49.360128 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:06:49 crc kubenswrapper[4802]: E1004 06:06:49.360893 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:07:00 crc kubenswrapper[4802]: I1004 06:07:00.360755 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:07:01 crc 
kubenswrapper[4802]: I1004 06:07:01.028996 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"079515d0024d0e4b037d97df7213d7bd4e7f1380765739c9b3a7430554df8704"} Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.700115 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k2hkj"] Oct 04 06:07:37 crc kubenswrapper[4802]: E1004 06:07:37.701035 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerName="extract-content" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.701157 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerName="extract-content" Oct 04 06:07:37 crc kubenswrapper[4802]: E1004 06:07:37.701175 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerName="registry-server" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.701183 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerName="registry-server" Oct 04 06:07:37 crc kubenswrapper[4802]: E1004 06:07:37.701220 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerName="extract-utilities" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.701227 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerName="extract-utilities" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.701424 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f41b23-4195-471e-bb8b-dfdabf36aa4f" containerName="registry-server" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.702981 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.718456 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k2hkj"] Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.746610 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-utilities\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.746726 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbk6z\" (UniqueName: \"kubernetes.io/projected/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-kube-api-access-vbk6z\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.746841 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-catalog-content\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.848331 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-utilities\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.848451 4802 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vbk6z\" (UniqueName: \"kubernetes.io/projected/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-kube-api-access-vbk6z\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.848601 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-catalog-content\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.848842 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-utilities\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.849129 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-catalog-content\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:37 crc kubenswrapper[4802]: I1004 06:07:37.871919 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbk6z\" (UniqueName: \"kubernetes.io/projected/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-kube-api-access-vbk6z\") pod \"certified-operators-k2hkj\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:38 crc kubenswrapper[4802]: I1004 06:07:38.029339 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:38 crc kubenswrapper[4802]: I1004 06:07:38.648770 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k2hkj"] Oct 04 06:07:39 crc kubenswrapper[4802]: I1004 06:07:39.423087 4802 generic.go:334] "Generic (PLEG): container finished" podID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerID="3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc" exitCode=0 Oct 04 06:07:39 crc kubenswrapper[4802]: I1004 06:07:39.423134 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2hkj" event={"ID":"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7","Type":"ContainerDied","Data":"3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc"} Oct 04 06:07:39 crc kubenswrapper[4802]: I1004 06:07:39.423354 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2hkj" event={"ID":"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7","Type":"ContainerStarted","Data":"adae7a9534642176a5bb93ca22ac8d5623d7223f4937964db40bd9f8bcef4929"} Oct 04 06:07:39 crc kubenswrapper[4802]: I1004 06:07:39.425103 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 06:07:40 crc kubenswrapper[4802]: I1004 06:07:40.434714 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2hkj" event={"ID":"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7","Type":"ContainerStarted","Data":"7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f"} Oct 04 06:07:42 crc kubenswrapper[4802]: I1004 06:07:42.502884 4802 generic.go:334] "Generic (PLEG): container finished" podID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerID="7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f" exitCode=0 Oct 04 06:07:42 crc kubenswrapper[4802]: I1004 06:07:42.504204 4802 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-k2hkj" event={"ID":"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7","Type":"ContainerDied","Data":"7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f"} Oct 04 06:07:43 crc kubenswrapper[4802]: I1004 06:07:43.514339 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2hkj" event={"ID":"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7","Type":"ContainerStarted","Data":"42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb"} Oct 04 06:07:43 crc kubenswrapper[4802]: I1004 06:07:43.543529 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k2hkj" podStartSLOduration=3.023274218 podStartE2EDuration="6.543506188s" podCreationTimestamp="2025-10-04 06:07:37 +0000 UTC" firstStartedPulling="2025-10-04 06:07:39.424866023 +0000 UTC m=+4901.832866648" lastFinishedPulling="2025-10-04 06:07:42.945097993 +0000 UTC m=+4905.353098618" observedRunningTime="2025-10-04 06:07:43.537023452 +0000 UTC m=+4905.945024107" watchObservedRunningTime="2025-10-04 06:07:43.543506188 +0000 UTC m=+4905.951506813" Oct 04 06:07:48 crc kubenswrapper[4802]: I1004 06:07:48.029876 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:48 crc kubenswrapper[4802]: I1004 06:07:48.030370 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:48 crc kubenswrapper[4802]: I1004 06:07:48.615839 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:48 crc kubenswrapper[4802]: I1004 06:07:48.672425 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:49 crc kubenswrapper[4802]: I1004 
06:07:49.808819 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k2hkj"] Oct 04 06:07:50 crc kubenswrapper[4802]: I1004 06:07:50.574338 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k2hkj" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerName="registry-server" containerID="cri-o://42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb" gracePeriod=2 Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.116305 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.228310 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-catalog-content\") pod \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.228367 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbk6z\" (UniqueName: \"kubernetes.io/projected/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-kube-api-access-vbk6z\") pod \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.228592 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-utilities\") pod \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\" (UID: \"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7\") " Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.230177 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-utilities" (OuterVolumeSpecName: 
"utilities") pod "8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" (UID: "8b76442f-3596-49a5-86fa-e1bfb5c6d6a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.245457 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-kube-api-access-vbk6z" (OuterVolumeSpecName: "kube-api-access-vbk6z") pod "8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" (UID: "8b76442f-3596-49a5-86fa-e1bfb5c6d6a7"). InnerVolumeSpecName "kube-api-access-vbk6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.293059 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" (UID: "8b76442f-3596-49a5-86fa-e1bfb5c6d6a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.331811 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.331843 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.331853 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbk6z\" (UniqueName: \"kubernetes.io/projected/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7-kube-api-access-vbk6z\") on node \"crc\" DevicePath \"\"" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.590354 4802 generic.go:334] "Generic (PLEG): container finished" podID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerID="42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb" exitCode=0 Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.590404 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2hkj" event={"ID":"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7","Type":"ContainerDied","Data":"42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb"} Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.590434 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k2hkj" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.590476 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2hkj" event={"ID":"8b76442f-3596-49a5-86fa-e1bfb5c6d6a7","Type":"ContainerDied","Data":"adae7a9534642176a5bb93ca22ac8d5623d7223f4937964db40bd9f8bcef4929"} Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.590499 4802 scope.go:117] "RemoveContainer" containerID="42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.642299 4802 scope.go:117] "RemoveContainer" containerID="7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.668432 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k2hkj"] Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.679043 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k2hkj"] Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.683371 4802 scope.go:117] "RemoveContainer" containerID="3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.731871 4802 scope.go:117] "RemoveContainer" containerID="42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb" Oct 04 06:07:51 crc kubenswrapper[4802]: E1004 06:07:51.732401 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb\": container with ID starting with 42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb not found: ID does not exist" containerID="42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.732512 4802 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb"} err="failed to get container status \"42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb\": rpc error: code = NotFound desc = could not find container \"42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb\": container with ID starting with 42886c73eb367d9dc41e074669c69bb65d54c34dd673a74ca296566b11ca0feb not found: ID does not exist" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.732625 4802 scope.go:117] "RemoveContainer" containerID="7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f" Oct 04 06:07:51 crc kubenswrapper[4802]: E1004 06:07:51.733051 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f\": container with ID starting with 7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f not found: ID does not exist" containerID="7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.733106 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f"} err="failed to get container status \"7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f\": rpc error: code = NotFound desc = could not find container \"7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f\": container with ID starting with 7734e453ce70c5db445b601e33d6d99b7c8b3718d1fe58be0cf4b4faf2d8951f not found: ID does not exist" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.733124 4802 scope.go:117] "RemoveContainer" containerID="3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc" Oct 04 06:07:51 crc kubenswrapper[4802]: E1004 
06:07:51.733416 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc\": container with ID starting with 3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc not found: ID does not exist" containerID="3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc" Oct 04 06:07:51 crc kubenswrapper[4802]: I1004 06:07:51.733509 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc"} err="failed to get container status \"3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc\": rpc error: code = NotFound desc = could not find container \"3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc\": container with ID starting with 3d8f91284e930cd45767cb262f2392fe7926a9b2720168c46a30f52b75f7b9dc not found: ID does not exist" Oct 04 06:07:52 crc kubenswrapper[4802]: I1004 06:07:52.384440 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" path="/var/lib/kubelet/pods/8b76442f-3596-49a5-86fa-e1bfb5c6d6a7/volumes" Oct 04 06:09:22 crc kubenswrapper[4802]: I1004 06:09:22.662533 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:09:22 crc kubenswrapper[4802]: I1004 06:09:22.663575 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.310340 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zpztc"] Oct 04 06:09:43 crc kubenswrapper[4802]: E1004 06:09:43.313564 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerName="extract-utilities" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.313744 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerName="extract-utilities" Oct 04 06:09:43 crc kubenswrapper[4802]: E1004 06:09:43.313859 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerName="extract-content" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.313941 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerName="extract-content" Oct 04 06:09:43 crc kubenswrapper[4802]: E1004 06:09:43.314025 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerName="registry-server" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.314103 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerName="registry-server" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.314385 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b76442f-3596-49a5-86fa-e1bfb5c6d6a7" containerName="registry-server" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.324254 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.351361 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zpztc"] Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.514043 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grq49\" (UniqueName: \"kubernetes.io/projected/b1b5ce91-bfc0-410e-bfaa-9921322caa93-kube-api-access-grq49\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.514115 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-utilities\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.514236 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-catalog-content\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.616500 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grq49\" (UniqueName: \"kubernetes.io/projected/b1b5ce91-bfc0-410e-bfaa-9921322caa93-kube-api-access-grq49\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.616956 4802 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-utilities\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.617096 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-catalog-content\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.617345 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-utilities\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.617553 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-catalog-content\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.642780 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grq49\" (UniqueName: \"kubernetes.io/projected/b1b5ce91-bfc0-410e-bfaa-9921322caa93-kube-api-access-grq49\") pod \"community-operators-zpztc\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:43 crc kubenswrapper[4802]: I1004 06:09:43.674455 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:44 crc kubenswrapper[4802]: I1004 06:09:44.254738 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zpztc"] Oct 04 06:09:44 crc kubenswrapper[4802]: I1004 06:09:44.708489 4802 generic.go:334] "Generic (PLEG): container finished" podID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerID="e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654" exitCode=0 Oct 04 06:09:44 crc kubenswrapper[4802]: I1004 06:09:44.708555 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zpztc" event={"ID":"b1b5ce91-bfc0-410e-bfaa-9921322caa93","Type":"ContainerDied","Data":"e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654"} Oct 04 06:09:44 crc kubenswrapper[4802]: I1004 06:09:44.708598 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zpztc" event={"ID":"b1b5ce91-bfc0-410e-bfaa-9921322caa93","Type":"ContainerStarted","Data":"35ea615c39333179ed3fadaf6ef832ab2700cc85d2c5816c04bc275a7d188d83"} Oct 04 06:09:46 crc kubenswrapper[4802]: I1004 06:09:46.731966 4802 generic.go:334] "Generic (PLEG): container finished" podID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerID="32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af" exitCode=0 Oct 04 06:09:46 crc kubenswrapper[4802]: I1004 06:09:46.732030 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zpztc" event={"ID":"b1b5ce91-bfc0-410e-bfaa-9921322caa93","Type":"ContainerDied","Data":"32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af"} Oct 04 06:09:48 crc kubenswrapper[4802]: I1004 06:09:48.756862 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zpztc" 
event={"ID":"b1b5ce91-bfc0-410e-bfaa-9921322caa93","Type":"ContainerStarted","Data":"ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c"} Oct 04 06:09:48 crc kubenswrapper[4802]: I1004 06:09:48.778385 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zpztc" podStartSLOduration=2.606146942 podStartE2EDuration="5.778362477s" podCreationTimestamp="2025-10-04 06:09:43 +0000 UTC" firstStartedPulling="2025-10-04 06:09:44.710930278 +0000 UTC m=+5027.118930913" lastFinishedPulling="2025-10-04 06:09:47.883145803 +0000 UTC m=+5030.291146448" observedRunningTime="2025-10-04 06:09:48.777743539 +0000 UTC m=+5031.185744164" watchObservedRunningTime="2025-10-04 06:09:48.778362477 +0000 UTC m=+5031.186363102" Oct 04 06:09:52 crc kubenswrapper[4802]: I1004 06:09:52.662685 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:09:52 crc kubenswrapper[4802]: I1004 06:09:52.663334 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:09:53 crc kubenswrapper[4802]: I1004 06:09:53.674860 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:53 crc kubenswrapper[4802]: I1004 06:09:53.675247 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:53 crc kubenswrapper[4802]: I1004 06:09:53.759837 4802 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:53 crc kubenswrapper[4802]: I1004 06:09:53.894735 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:54 crc kubenswrapper[4802]: I1004 06:09:54.031989 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zpztc"] Oct 04 06:09:55 crc kubenswrapper[4802]: I1004 06:09:55.830047 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zpztc" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerName="registry-server" containerID="cri-o://ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c" gracePeriod=2 Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.430345 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.600946 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-utilities\") pod \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.601114 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grq49\" (UniqueName: \"kubernetes.io/projected/b1b5ce91-bfc0-410e-bfaa-9921322caa93-kube-api-access-grq49\") pod \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.601175 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-catalog-content\") pod \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\" (UID: \"b1b5ce91-bfc0-410e-bfaa-9921322caa93\") " Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.602343 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-utilities" (OuterVolumeSpecName: "utilities") pod "b1b5ce91-bfc0-410e-bfaa-9921322caa93" (UID: "b1b5ce91-bfc0-410e-bfaa-9921322caa93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.607572 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b5ce91-bfc0-410e-bfaa-9921322caa93-kube-api-access-grq49" (OuterVolumeSpecName: "kube-api-access-grq49") pod "b1b5ce91-bfc0-410e-bfaa-9921322caa93" (UID: "b1b5ce91-bfc0-410e-bfaa-9921322caa93"). InnerVolumeSpecName "kube-api-access-grq49". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.687931 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1b5ce91-bfc0-410e-bfaa-9921322caa93" (UID: "b1b5ce91-bfc0-410e-bfaa-9921322caa93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.703603 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grq49\" (UniqueName: \"kubernetes.io/projected/b1b5ce91-bfc0-410e-bfaa-9921322caa93-kube-api-access-grq49\") on node \"crc\" DevicePath \"\"" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.703664 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.703681 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b5ce91-bfc0-410e-bfaa-9921322caa93-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.841080 4802 generic.go:334] "Generic (PLEG): container finished" podID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerID="ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c" exitCode=0 Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.841171 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zpztc" event={"ID":"b1b5ce91-bfc0-410e-bfaa-9921322caa93","Type":"ContainerDied","Data":"ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c"} Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.841628 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zpztc" event={"ID":"b1b5ce91-bfc0-410e-bfaa-9921322caa93","Type":"ContainerDied","Data":"35ea615c39333179ed3fadaf6ef832ab2700cc85d2c5816c04bc275a7d188d83"} Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.841720 4802 scope.go:117] "RemoveContainer" containerID="ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 
06:09:56.841192 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zpztc" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.873585 4802 scope.go:117] "RemoveContainer" containerID="32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.883047 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zpztc"] Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.892014 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zpztc"] Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.904211 4802 scope.go:117] "RemoveContainer" containerID="e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.933917 4802 scope.go:117] "RemoveContainer" containerID="ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c" Oct 04 06:09:56 crc kubenswrapper[4802]: E1004 06:09:56.934441 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c\": container with ID starting with ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c not found: ID does not exist" containerID="ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.934575 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c"} err="failed to get container status \"ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c\": rpc error: code = NotFound desc = could not find container \"ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c\": container with ID starting with 
ead15dd882ef69d67d6a2a46165171e2ca18310f6c9b7e7dc8e5bc39db88db8c not found: ID does not exist" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.934724 4802 scope.go:117] "RemoveContainer" containerID="32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af" Oct 04 06:09:56 crc kubenswrapper[4802]: E1004 06:09:56.935231 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af\": container with ID starting with 32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af not found: ID does not exist" containerID="32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.935365 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af"} err="failed to get container status \"32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af\": rpc error: code = NotFound desc = could not find container \"32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af\": container with ID starting with 32c4670000b51ccab301dc855d19e6862978f2c1462e89f8b5867ddc597c72af not found: ID does not exist" Oct 04 06:09:56 crc kubenswrapper[4802]: I1004 06:09:56.935485 4802 scope.go:117] "RemoveContainer" containerID="e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654" Oct 04 06:09:56 crc kubenswrapper[4802]: E1004 06:09:56.935869 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654\": container with ID starting with e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654 not found: ID does not exist" containerID="e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654" Oct 04 06:09:56 crc 
kubenswrapper[4802]: I1004 06:09:56.935912 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654"} err="failed to get container status \"e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654\": rpc error: code = NotFound desc = could not find container \"e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654\": container with ID starting with e0c6262057c3cbecf6f15315ab7456808aaaa3f55ffab6781fb69b3048516654 not found: ID does not exist" Oct 04 06:09:58 crc kubenswrapper[4802]: I1004 06:09:58.384297 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" path="/var/lib/kubelet/pods/b1b5ce91-bfc0-410e-bfaa-9921322caa93/volumes" Oct 04 06:10:22 crc kubenswrapper[4802]: I1004 06:10:22.662464 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:10:22 crc kubenswrapper[4802]: I1004 06:10:22.663250 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:10:22 crc kubenswrapper[4802]: I1004 06:10:22.663306 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 06:10:22 crc kubenswrapper[4802]: I1004 06:10:22.664179 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"079515d0024d0e4b037d97df7213d7bd4e7f1380765739c9b3a7430554df8704"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 06:10:22 crc kubenswrapper[4802]: I1004 06:10:22.664272 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://079515d0024d0e4b037d97df7213d7bd4e7f1380765739c9b3a7430554df8704" gracePeriod=600 Oct 04 06:10:23 crc kubenswrapper[4802]: I1004 06:10:23.122447 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="079515d0024d0e4b037d97df7213d7bd4e7f1380765739c9b3a7430554df8704" exitCode=0 Oct 04 06:10:23 crc kubenswrapper[4802]: I1004 06:10:23.122521 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"079515d0024d0e4b037d97df7213d7bd4e7f1380765739c9b3a7430554df8704"} Oct 04 06:10:23 crc kubenswrapper[4802]: I1004 06:10:23.122869 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7"} Oct 04 06:10:23 crc kubenswrapper[4802]: I1004 06:10:23.122899 4802 scope.go:117] "RemoveContainer" containerID="634f557cd2e8b2272b5f445d1b335faf09c72e58791b3ffa66d264db4b8171ce" Oct 04 06:12:52 crc kubenswrapper[4802]: I1004 06:12:52.663009 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:12:52 crc kubenswrapper[4802]: I1004 06:12:52.663586 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.183391 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pxl4j"] Oct 04 06:12:56 crc kubenswrapper[4802]: E1004 06:12:56.184569 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerName="registry-server" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.184591 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerName="registry-server" Oct 04 06:12:56 crc kubenswrapper[4802]: E1004 06:12:56.184627 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerName="extract-utilities" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.184638 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerName="extract-utilities" Oct 04 06:12:56 crc kubenswrapper[4802]: E1004 06:12:56.184691 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerName="extract-content" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.184704 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerName="extract-content" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.185003 4802 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b1b5ce91-bfc0-410e-bfaa-9921322caa93" containerName="registry-server" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.187179 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.209968 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pxl4j"] Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.282266 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mpg\" (UniqueName: \"kubernetes.io/projected/94078128-cc6e-4b32-807f-5cbfce6c132b-kube-api-access-k7mpg\") pod \"redhat-operators-pxl4j\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.282339 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-catalog-content\") pod \"redhat-operators-pxl4j\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.282464 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-utilities\") pod \"redhat-operators-pxl4j\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.383837 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7mpg\" (UniqueName: \"kubernetes.io/projected/94078128-cc6e-4b32-807f-5cbfce6c132b-kube-api-access-k7mpg\") pod \"redhat-operators-pxl4j\" (UID: 
\"94078128-cc6e-4b32-807f-5cbfce6c132b\") " pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.384122 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-catalog-content\") pod \"redhat-operators-pxl4j\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.384269 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-utilities\") pod \"redhat-operators-pxl4j\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.384836 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-catalog-content\") pod \"redhat-operators-pxl4j\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.384856 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-utilities\") pod \"redhat-operators-pxl4j\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.418799 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7mpg\" (UniqueName: \"kubernetes.io/projected/94078128-cc6e-4b32-807f-5cbfce6c132b-kube-api-access-k7mpg\") pod \"redhat-operators-pxl4j\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " 
pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.521577 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:12:56 crc kubenswrapper[4802]: I1004 06:12:56.975816 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pxl4j"] Oct 04 06:12:57 crc kubenswrapper[4802]: I1004 06:12:57.849239 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxl4j" event={"ID":"94078128-cc6e-4b32-807f-5cbfce6c132b","Type":"ContainerStarted","Data":"0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75"} Oct 04 06:12:57 crc kubenswrapper[4802]: I1004 06:12:57.849929 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxl4j" event={"ID":"94078128-cc6e-4b32-807f-5cbfce6c132b","Type":"ContainerStarted","Data":"2548bd8aac7441cba3d91fdc2292b6d331e4e79a4ddff14491937ab888ec6b44"} Oct 04 06:12:58 crc kubenswrapper[4802]: I1004 06:12:58.864826 4802 generic.go:334] "Generic (PLEG): container finished" podID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerID="0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75" exitCode=0 Oct 04 06:12:58 crc kubenswrapper[4802]: I1004 06:12:58.864894 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxl4j" event={"ID":"94078128-cc6e-4b32-807f-5cbfce6c132b","Type":"ContainerDied","Data":"0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75"} Oct 04 06:12:58 crc kubenswrapper[4802]: I1004 06:12:58.870334 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 06:12:59 crc kubenswrapper[4802]: I1004 06:12:59.876171 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxl4j" 
event={"ID":"94078128-cc6e-4b32-807f-5cbfce6c132b","Type":"ContainerStarted","Data":"8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9"} Oct 04 06:13:02 crc kubenswrapper[4802]: I1004 06:13:02.909307 4802 generic.go:334] "Generic (PLEG): container finished" podID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerID="8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9" exitCode=0 Oct 04 06:13:02 crc kubenswrapper[4802]: I1004 06:13:02.909402 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxl4j" event={"ID":"94078128-cc6e-4b32-807f-5cbfce6c132b","Type":"ContainerDied","Data":"8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9"} Oct 04 06:13:03 crc kubenswrapper[4802]: I1004 06:13:03.921238 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxl4j" event={"ID":"94078128-cc6e-4b32-807f-5cbfce6c132b","Type":"ContainerStarted","Data":"6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e"} Oct 04 06:13:03 crc kubenswrapper[4802]: I1004 06:13:03.964076 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pxl4j" podStartSLOduration=3.272017658 podStartE2EDuration="7.964054953s" podCreationTimestamp="2025-10-04 06:12:56 +0000 UTC" firstStartedPulling="2025-10-04 06:12:58.869203994 +0000 UTC m=+5221.277204659" lastFinishedPulling="2025-10-04 06:13:03.561241329 +0000 UTC m=+5225.969241954" observedRunningTime="2025-10-04 06:13:03.947216241 +0000 UTC m=+5226.355216866" watchObservedRunningTime="2025-10-04 06:13:03.964054953 +0000 UTC m=+5226.372055578" Oct 04 06:13:06 crc kubenswrapper[4802]: I1004 06:13:06.521729 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:13:06 crc kubenswrapper[4802]: I1004 06:13:06.522127 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:13:07 crc kubenswrapper[4802]: I1004 06:13:07.609529 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pxl4j" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="registry-server" probeResult="failure" output=< Oct 04 06:13:07 crc kubenswrapper[4802]: timeout: failed to connect service ":50051" within 1s Oct 04 06:13:07 crc kubenswrapper[4802]: > Oct 04 06:13:16 crc kubenswrapper[4802]: I1004 06:13:16.581940 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:13:16 crc kubenswrapper[4802]: I1004 06:13:16.656305 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:13:16 crc kubenswrapper[4802]: I1004 06:13:16.836712 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pxl4j"] Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.064529 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pxl4j" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="registry-server" containerID="cri-o://6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e" gracePeriod=2 Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.660170 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.757575 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-catalog-content\") pod \"94078128-cc6e-4b32-807f-5cbfce6c132b\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.757866 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7mpg\" (UniqueName: \"kubernetes.io/projected/94078128-cc6e-4b32-807f-5cbfce6c132b-kube-api-access-k7mpg\") pod \"94078128-cc6e-4b32-807f-5cbfce6c132b\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.757984 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-utilities\") pod \"94078128-cc6e-4b32-807f-5cbfce6c132b\" (UID: \"94078128-cc6e-4b32-807f-5cbfce6c132b\") " Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.759702 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-utilities" (OuterVolumeSpecName: "utilities") pod "94078128-cc6e-4b32-807f-5cbfce6c132b" (UID: "94078128-cc6e-4b32-807f-5cbfce6c132b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.772817 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94078128-cc6e-4b32-807f-5cbfce6c132b-kube-api-access-k7mpg" (OuterVolumeSpecName: "kube-api-access-k7mpg") pod "94078128-cc6e-4b32-807f-5cbfce6c132b" (UID: "94078128-cc6e-4b32-807f-5cbfce6c132b"). InnerVolumeSpecName "kube-api-access-k7mpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.840471 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94078128-cc6e-4b32-807f-5cbfce6c132b" (UID: "94078128-cc6e-4b32-807f-5cbfce6c132b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.860462 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.860485 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7mpg\" (UniqueName: \"kubernetes.io/projected/94078128-cc6e-4b32-807f-5cbfce6c132b-kube-api-access-k7mpg\") on node \"crc\" DevicePath \"\"" Oct 04 06:13:18 crc kubenswrapper[4802]: I1004 06:13:18.860494 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94078128-cc6e-4b32-807f-5cbfce6c132b-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.077581 4802 generic.go:334] "Generic (PLEG): container finished" podID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerID="6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e" exitCode=0 Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.077634 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pxl4j" event={"ID":"94078128-cc6e-4b32-807f-5cbfce6c132b","Type":"ContainerDied","Data":"6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e"} Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.077890 4802 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-pxl4j" event={"ID":"94078128-cc6e-4b32-807f-5cbfce6c132b","Type":"ContainerDied","Data":"2548bd8aac7441cba3d91fdc2292b6d331e4e79a4ddff14491937ab888ec6b44"} Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.077909 4802 scope.go:117] "RemoveContainer" containerID="6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.077762 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pxl4j" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.112932 4802 scope.go:117] "RemoveContainer" containerID="8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.140959 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pxl4j"] Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.161812 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pxl4j"] Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.162455 4802 scope.go:117] "RemoveContainer" containerID="0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.216086 4802 scope.go:117] "RemoveContainer" containerID="6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e" Oct 04 06:13:19 crc kubenswrapper[4802]: E1004 06:13:19.218551 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e\": container with ID starting with 6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e not found: ID does not exist" containerID="6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.218620 4802 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e"} err="failed to get container status \"6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e\": rpc error: code = NotFound desc = could not find container \"6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e\": container with ID starting with 6d51b3e3656579c3eea97456f65ecc00189a8a890cb38f057772dc425344c47e not found: ID does not exist" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.218708 4802 scope.go:117] "RemoveContainer" containerID="8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9" Oct 04 06:13:19 crc kubenswrapper[4802]: E1004 06:13:19.220030 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9\": container with ID starting with 8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9 not found: ID does not exist" containerID="8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.220091 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9"} err="failed to get container status \"8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9\": rpc error: code = NotFound desc = could not find container \"8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9\": container with ID starting with 8818d030a02766057350664b50126c3f6a5bfcdc78bc00de0fd04eaad923b6c9 not found: ID does not exist" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.220129 4802 scope.go:117] "RemoveContainer" containerID="0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75" Oct 04 06:13:19 crc kubenswrapper[4802]: E1004 
06:13:19.221503 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75\": container with ID starting with 0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75 not found: ID does not exist" containerID="0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75" Oct 04 06:13:19 crc kubenswrapper[4802]: I1004 06:13:19.221569 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75"} err="failed to get container status \"0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75\": rpc error: code = NotFound desc = could not find container \"0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75\": container with ID starting with 0a9c2483084979cd9fcacb009ba85a60003a5fc174f95cb48a9989dac3ab8e75 not found: ID does not exist" Oct 04 06:13:20 crc kubenswrapper[4802]: I1004 06:13:20.382346 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" path="/var/lib/kubelet/pods/94078128-cc6e-4b32-807f-5cbfce6c132b/volumes" Oct 04 06:13:22 crc kubenswrapper[4802]: I1004 06:13:22.662580 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:13:22 crc kubenswrapper[4802]: I1004 06:13:22.663468 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 04 06:13:52 crc kubenswrapper[4802]: I1004 06:13:52.662543 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:13:52 crc kubenswrapper[4802]: I1004 06:13:52.663327 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:13:52 crc kubenswrapper[4802]: I1004 06:13:52.663387 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 06:13:52 crc kubenswrapper[4802]: I1004 06:13:52.664477 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 06:13:52 crc kubenswrapper[4802]: I1004 06:13:52.664596 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" gracePeriod=600 Oct 04 06:13:53 crc kubenswrapper[4802]: E1004 06:13:53.234485 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:13:53 crc kubenswrapper[4802]: I1004 06:13:53.534384 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" exitCode=0 Oct 04 06:13:53 crc kubenswrapper[4802]: I1004 06:13:53.534941 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7"} Oct 04 06:13:53 crc kubenswrapper[4802]: I1004 06:13:53.535019 4802 scope.go:117] "RemoveContainer" containerID="079515d0024d0e4b037d97df7213d7bd4e7f1380765739c9b3a7430554df8704" Oct 04 06:13:53 crc kubenswrapper[4802]: I1004 06:13:53.536360 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:13:53 crc kubenswrapper[4802]: E1004 06:13:53.536973 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:14:05 crc kubenswrapper[4802]: I1004 06:14:05.359473 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:14:05 crc kubenswrapper[4802]: E1004 06:14:05.360291 4802 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:14:19 crc kubenswrapper[4802]: I1004 06:14:19.359549 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:14:19 crc kubenswrapper[4802]: E1004 06:14:19.360403 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:14:24 crc kubenswrapper[4802]: I1004 06:14:24.885734 4802 generic.go:334] "Generic (PLEG): container finished" podID="9ff51956-c2e9-4e25-9cd4-56bb6304b7db" containerID="af116b74e694deacf7e2df1ef4a80cc45162275738010ba992f4280448ef0a59" exitCode=1 Oct 04 06:14:24 crc kubenswrapper[4802]: I1004 06:14:24.885784 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9ff51956-c2e9-4e25-9cd4-56bb6304b7db","Type":"ContainerDied","Data":"af116b74e694deacf7e2df1ef4a80cc45162275738010ba992f4280448ef0a59"} Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.387875 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483108 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config-secret\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483218 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-workdir\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483273 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483318 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r6h6\" (UniqueName: \"kubernetes.io/projected/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-kube-api-access-9r6h6\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483369 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ca-certs\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483563 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483678 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-config-data\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483791 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ssh-key\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.483859 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-temporary\") pod \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\" (UID: \"9ff51956-c2e9-4e25-9cd4-56bb6304b7db\") " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.488880 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-config-data" (OuterVolumeSpecName: "config-data") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.489463 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.492337 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.500791 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.504454 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-kube-api-access-9r6h6" (OuterVolumeSpecName: "kube-api-access-9r6h6") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "kube-api-access-9r6h6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.514379 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.515898 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.525508 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.580041 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9ff51956-c2e9-4e25-9cd4-56bb6304b7db" (UID: "9ff51956-c2e9-4e25-9cd4-56bb6304b7db"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.586572 4802 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.586605 4802 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.586657 4802 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.586676 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r6h6\" (UniqueName: \"kubernetes.io/projected/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-kube-api-access-9r6h6\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.586686 4802 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.586695 4802 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.586704 4802 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-config-data\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc 
kubenswrapper[4802]: I1004 06:14:26.586711 4802 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.586720 4802 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ff51956-c2e9-4e25-9cd4-56bb6304b7db-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.606372 4802 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.688889 4802 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.907815 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9ff51956-c2e9-4e25-9cd4-56bb6304b7db","Type":"ContainerDied","Data":"473717d0204abce557f70251bd6448bff07dcde8b840e8724d6db7a521d92efb"} Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.907859 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="473717d0204abce557f70251bd6448bff07dcde8b840e8724d6db7a521d92efb" Oct 04 06:14:26 crc kubenswrapper[4802]: I1004 06:14:26.908101 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 04 06:14:34 crc kubenswrapper[4802]: I1004 06:14:34.360146 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:14:34 crc kubenswrapper[4802]: E1004 06:14:34.361555 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.718451 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 04 06:14:36 crc kubenswrapper[4802]: E1004 06:14:36.719525 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff51956-c2e9-4e25-9cd4-56bb6304b7db" containerName="tempest-tests-tempest-tests-runner" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.719540 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff51956-c2e9-4e25-9cd4-56bb6304b7db" containerName="tempest-tests-tempest-tests-runner" Oct 04 06:14:36 crc kubenswrapper[4802]: E1004 06:14:36.719556 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="registry-server" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.719562 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="registry-server" Oct 04 06:14:36 crc kubenswrapper[4802]: E1004 06:14:36.719578 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="extract-utilities" Oct 04 06:14:36 crc kubenswrapper[4802]: 
I1004 06:14:36.719585 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="extract-utilities" Oct 04 06:14:36 crc kubenswrapper[4802]: E1004 06:14:36.719600 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="extract-content" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.719606 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="extract-content" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.719838 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="94078128-cc6e-4b32-807f-5cbfce6c132b" containerName="registry-server" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.719855 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff51956-c2e9-4e25-9cd4-56bb6304b7db" containerName="tempest-tests-tempest-tests-runner" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.720502 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.722744 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mncbg" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.727699 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.838849 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"024d4dc0-8096-467a-9b96-9394a8965e48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.839014 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcnjx\" (UniqueName: \"kubernetes.io/projected/024d4dc0-8096-467a-9b96-9394a8965e48-kube-api-access-jcnjx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"024d4dc0-8096-467a-9b96-9394a8965e48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.941120 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"024d4dc0-8096-467a-9b96-9394a8965e48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.941293 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcnjx\" (UniqueName: 
\"kubernetes.io/projected/024d4dc0-8096-467a-9b96-9394a8965e48-kube-api-access-jcnjx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"024d4dc0-8096-467a-9b96-9394a8965e48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.941790 4802 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"024d4dc0-8096-467a-9b96-9394a8965e48\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.963671 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcnjx\" (UniqueName: \"kubernetes.io/projected/024d4dc0-8096-467a-9b96-9394a8965e48-kube-api-access-jcnjx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"024d4dc0-8096-467a-9b96-9394a8965e48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:36 crc kubenswrapper[4802]: I1004 06:14:36.991685 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"024d4dc0-8096-467a-9b96-9394a8965e48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:37 crc kubenswrapper[4802]: I1004 06:14:37.048215 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 04 06:14:37 crc kubenswrapper[4802]: I1004 06:14:37.517949 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 04 06:14:38 crc kubenswrapper[4802]: I1004 06:14:38.022083 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"024d4dc0-8096-467a-9b96-9394a8965e48","Type":"ContainerStarted","Data":"d6ff2a6e7248ae71089cd693eac9f99dfd575401c130c7c91e3a03d5e359973d"} Oct 04 06:14:39 crc kubenswrapper[4802]: I1004 06:14:39.035087 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"024d4dc0-8096-467a-9b96-9394a8965e48","Type":"ContainerStarted","Data":"1f72bbad0b1b06a9c2933886e1d186e13c3360f00cb5a28c013641e515927fc4"} Oct 04 06:14:39 crc kubenswrapper[4802]: I1004 06:14:39.068405 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.166477748 podStartE2EDuration="3.06835039s" podCreationTimestamp="2025-10-04 06:14:36 +0000 UTC" firstStartedPulling="2025-10-04 06:14:37.523575909 +0000 UTC m=+5319.931576534" lastFinishedPulling="2025-10-04 06:14:38.425448541 +0000 UTC m=+5320.833449176" observedRunningTime="2025-10-04 06:14:39.055556655 +0000 UTC m=+5321.463557320" watchObservedRunningTime="2025-10-04 06:14:39.06835039 +0000 UTC m=+5321.476351065" Oct 04 06:14:46 crc kubenswrapper[4802]: I1004 06:14:46.360838 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:14:46 crc kubenswrapper[4802]: E1004 06:14:46.362299 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:14:59 crc kubenswrapper[4802]: I1004 06:14:59.360766 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:14:59 crc kubenswrapper[4802]: E1004 06:14:59.361933 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.165626 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p"] Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.167061 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.169757 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.169825 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.185991 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p"] Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.314783 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-config-volume\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.314950 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fv72\" (UniqueName: \"kubernetes.io/projected/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-kube-api-access-7fv72\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.315125 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-secret-volume\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.417453 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fv72\" (UniqueName: \"kubernetes.io/projected/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-kube-api-access-7fv72\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.417597 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-secret-volume\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.417845 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-config-volume\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.418857 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-config-volume\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.437840 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fv72\" (UniqueName: 
\"kubernetes.io/projected/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-kube-api-access-7fv72\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.444185 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-secret-volume\") pod \"collect-profiles-29325975-npc8p\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.517076 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:00 crc kubenswrapper[4802]: I1004 06:15:00.981791 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p"] Oct 04 06:15:02 crc kubenswrapper[4802]: I1004 06:15:02.308024 4802 generic.go:334] "Generic (PLEG): container finished" podID="7ee224dc-5fed-41bb-8dfb-03dcfb35c88b" containerID="fdc033888f9445398368a2f1bb01a2434bb9b5b2e14232797161babba8f5817e" exitCode=0 Oct 04 06:15:02 crc kubenswrapper[4802]: I1004 06:15:02.308092 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" event={"ID":"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b","Type":"ContainerDied","Data":"fdc033888f9445398368a2f1bb01a2434bb9b5b2e14232797161babba8f5817e"} Oct 04 06:15:02 crc kubenswrapper[4802]: I1004 06:15:02.308376 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" 
event={"ID":"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b","Type":"ContainerStarted","Data":"adaaaa995bc654f7ba006a02cd41029cd4032c6f97694271b30959dc2bdcc45c"} Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.784649 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.890462 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-config-volume\") pod \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.890747 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-secret-volume\") pod \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.890821 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fv72\" (UniqueName: \"kubernetes.io/projected/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-kube-api-access-7fv72\") pod \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\" (UID: \"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b\") " Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.891420 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ee224dc-5fed-41bb-8dfb-03dcfb35c88b" (UID: "7ee224dc-5fed-41bb-8dfb-03dcfb35c88b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.891860 4802 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.897480 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-kube-api-access-7fv72" (OuterVolumeSpecName: "kube-api-access-7fv72") pod "7ee224dc-5fed-41bb-8dfb-03dcfb35c88b" (UID: "7ee224dc-5fed-41bb-8dfb-03dcfb35c88b"). InnerVolumeSpecName "kube-api-access-7fv72". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.901040 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ee224dc-5fed-41bb-8dfb-03dcfb35c88b" (UID: "7ee224dc-5fed-41bb-8dfb-03dcfb35c88b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.993327 4802 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 04 06:15:03 crc kubenswrapper[4802]: I1004 06:15:03.993369 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fv72\" (UniqueName: \"kubernetes.io/projected/7ee224dc-5fed-41bb-8dfb-03dcfb35c88b-kube-api-access-7fv72\") on node \"crc\" DevicePath \"\"" Oct 04 06:15:04 crc kubenswrapper[4802]: I1004 06:15:04.336203 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" event={"ID":"7ee224dc-5fed-41bb-8dfb-03dcfb35c88b","Type":"ContainerDied","Data":"adaaaa995bc654f7ba006a02cd41029cd4032c6f97694271b30959dc2bdcc45c"} Oct 04 06:15:04 crc kubenswrapper[4802]: I1004 06:15:04.336250 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adaaaa995bc654f7ba006a02cd41029cd4032c6f97694271b30959dc2bdcc45c" Oct 04 06:15:04 crc kubenswrapper[4802]: I1004 06:15:04.336270 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29325975-npc8p" Oct 04 06:15:04 crc kubenswrapper[4802]: I1004 06:15:04.869046 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq"] Oct 04 06:15:04 crc kubenswrapper[4802]: I1004 06:15:04.875938 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29325930-jw2hq"] Oct 04 06:15:06 crc kubenswrapper[4802]: I1004 06:15:06.373189 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a183a9b2-c2e9-489e-aabe-0ce929a2682c" path="/var/lib/kubelet/pods/a183a9b2-c2e9-489e-aabe-0ce929a2682c/volumes" Oct 04 06:15:11 crc kubenswrapper[4802]: I1004 06:15:11.361164 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:15:11 crc kubenswrapper[4802]: E1004 06:15:11.362880 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:15:16 crc kubenswrapper[4802]: I1004 06:15:16.143618 4802 scope.go:117] "RemoveContainer" containerID="ac67e2d89ae3a27e7833d6fc9a12df2694bb6a45315c80fda81e6f80a34ab7b3" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.381236 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gxgzv/must-gather-mzj8b"] Oct 04 06:15:23 crc kubenswrapper[4802]: E1004 06:15:23.382248 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee224dc-5fed-41bb-8dfb-03dcfb35c88b" containerName="collect-profiles" Oct 04 06:15:23 crc 
kubenswrapper[4802]: I1004 06:15:23.382265 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee224dc-5fed-41bb-8dfb-03dcfb35c88b" containerName="collect-profiles" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.382521 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee224dc-5fed-41bb-8dfb-03dcfb35c88b" containerName="collect-profiles" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.383762 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.387598 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gxgzv"/"openshift-service-ca.crt" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.389489 4802 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gxgzv"/"kube-root-ca.crt" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.394391 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gxgzv/must-gather-mzj8b"] Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.494880 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2mqs\" (UniqueName: \"kubernetes.io/projected/f05464ee-e487-453b-bc52-76e8eae65f4e-kube-api-access-q2mqs\") pod \"must-gather-mzj8b\" (UID: \"f05464ee-e487-453b-bc52-76e8eae65f4e\") " pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.494968 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f05464ee-e487-453b-bc52-76e8eae65f4e-must-gather-output\") pod \"must-gather-mzj8b\" (UID: \"f05464ee-e487-453b-bc52-76e8eae65f4e\") " pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 
06:15:23.596406 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2mqs\" (UniqueName: \"kubernetes.io/projected/f05464ee-e487-453b-bc52-76e8eae65f4e-kube-api-access-q2mqs\") pod \"must-gather-mzj8b\" (UID: \"f05464ee-e487-453b-bc52-76e8eae65f4e\") " pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.596486 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f05464ee-e487-453b-bc52-76e8eae65f4e-must-gather-output\") pod \"must-gather-mzj8b\" (UID: \"f05464ee-e487-453b-bc52-76e8eae65f4e\") " pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.597022 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f05464ee-e487-453b-bc52-76e8eae65f4e-must-gather-output\") pod \"must-gather-mzj8b\" (UID: \"f05464ee-e487-453b-bc52-76e8eae65f4e\") " pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.615509 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2mqs\" (UniqueName: \"kubernetes.io/projected/f05464ee-e487-453b-bc52-76e8eae65f4e-kube-api-access-q2mqs\") pod \"must-gather-mzj8b\" (UID: \"f05464ee-e487-453b-bc52-76e8eae65f4e\") " pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:15:23 crc kubenswrapper[4802]: I1004 06:15:23.702809 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:15:24 crc kubenswrapper[4802]: I1004 06:15:24.158685 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gxgzv/must-gather-mzj8b"] Oct 04 06:15:24 crc kubenswrapper[4802]: I1004 06:15:24.389027 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:15:24 crc kubenswrapper[4802]: E1004 06:15:24.394775 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:15:24 crc kubenswrapper[4802]: I1004 06:15:24.573871 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" event={"ID":"f05464ee-e487-453b-bc52-76e8eae65f4e","Type":"ContainerStarted","Data":"36767793e8049936af1b0b443870c4ae7bd736ec18d4cbd6c50c5b5ac631cbbe"} Oct 04 06:15:29 crc kubenswrapper[4802]: I1004 06:15:29.663450 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" event={"ID":"f05464ee-e487-453b-bc52-76e8eae65f4e","Type":"ContainerStarted","Data":"52c4b2a00cc4b43af43698ea23c699f2eb721ab67dac6b2d07d79af508c75d8d"} Oct 04 06:15:29 crc kubenswrapper[4802]: I1004 06:15:29.664209 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" event={"ID":"f05464ee-e487-453b-bc52-76e8eae65f4e","Type":"ContainerStarted","Data":"6e4e042e56e306a9451742a4212c8433a988617ffaeb7894a06a575e812692a5"} Oct 04 06:15:29 crc kubenswrapper[4802]: I1004 06:15:29.686275 4802 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" podStartSLOduration=2.164170663 podStartE2EDuration="6.686253017s" podCreationTimestamp="2025-10-04 06:15:23 +0000 UTC" firstStartedPulling="2025-10-04 06:15:24.170924156 +0000 UTC m=+5366.578924791" lastFinishedPulling="2025-10-04 06:15:28.6930065 +0000 UTC m=+5371.101007145" observedRunningTime="2025-10-04 06:15:29.677085175 +0000 UTC m=+5372.085085810" watchObservedRunningTime="2025-10-04 06:15:29.686253017 +0000 UTC m=+5372.094253652" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.273911 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gxgzv/crc-debug-vtg6z"] Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.277829 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.280515 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gxgzv"/"default-dockercfg-jsjhd" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.360808 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69lm\" (UniqueName: \"kubernetes.io/projected/f209b943-24a3-4b9d-b770-68938ad79d24-kube-api-access-m69lm\") pod \"crc-debug-vtg6z\" (UID: \"f209b943-24a3-4b9d-b770-68938ad79d24\") " pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.360880 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f209b943-24a3-4b9d-b770-68938ad79d24-host\") pod \"crc-debug-vtg6z\" (UID: \"f209b943-24a3-4b9d-b770-68938ad79d24\") " pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.462415 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m69lm\" (UniqueName: \"kubernetes.io/projected/f209b943-24a3-4b9d-b770-68938ad79d24-kube-api-access-m69lm\") pod \"crc-debug-vtg6z\" (UID: \"f209b943-24a3-4b9d-b770-68938ad79d24\") " pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.462482 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f209b943-24a3-4b9d-b770-68938ad79d24-host\") pod \"crc-debug-vtg6z\" (UID: \"f209b943-24a3-4b9d-b770-68938ad79d24\") " pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.462576 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f209b943-24a3-4b9d-b770-68938ad79d24-host\") pod \"crc-debug-vtg6z\" (UID: \"f209b943-24a3-4b9d-b770-68938ad79d24\") " pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.481601 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m69lm\" (UniqueName: \"kubernetes.io/projected/f209b943-24a3-4b9d-b770-68938ad79d24-kube-api-access-m69lm\") pod \"crc-debug-vtg6z\" (UID: \"f209b943-24a3-4b9d-b770-68938ad79d24\") " pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.593862 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:15:33 crc kubenswrapper[4802]: W1004 06:15:33.637959 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf209b943_24a3_4b9d_b770_68938ad79d24.slice/crio-09ec462ad4a53d6372a0c329586378b7b872e8b4ad5a6b340fadcfb17ac62b5e WatchSource:0}: Error finding container 09ec462ad4a53d6372a0c329586378b7b872e8b4ad5a6b340fadcfb17ac62b5e: Status 404 returned error can't find the container with id 09ec462ad4a53d6372a0c329586378b7b872e8b4ad5a6b340fadcfb17ac62b5e Oct 04 06:15:33 crc kubenswrapper[4802]: I1004 06:15:33.705268 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" event={"ID":"f209b943-24a3-4b9d-b770-68938ad79d24","Type":"ContainerStarted","Data":"09ec462ad4a53d6372a0c329586378b7b872e8b4ad5a6b340fadcfb17ac62b5e"} Oct 04 06:15:36 crc kubenswrapper[4802]: I1004 06:15:36.363127 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:15:36 crc kubenswrapper[4802]: E1004 06:15:36.364574 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:15:44 crc kubenswrapper[4802]: I1004 06:15:44.813524 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" event={"ID":"f209b943-24a3-4b9d-b770-68938ad79d24","Type":"ContainerStarted","Data":"44f137cb949cc99e75f4d43ab331c0509a54157ab0b4af33f73dc25fd2b57a1e"} Oct 04 06:15:44 crc kubenswrapper[4802]: I1004 06:15:44.828484 4802 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" podStartSLOduration=1.8715823980000001 podStartE2EDuration="11.828467507s" podCreationTimestamp="2025-10-04 06:15:33 +0000 UTC" firstStartedPulling="2025-10-04 06:15:33.640375057 +0000 UTC m=+5376.048375692" lastFinishedPulling="2025-10-04 06:15:43.597260176 +0000 UTC m=+5386.005260801" observedRunningTime="2025-10-04 06:15:44.826576103 +0000 UTC m=+5387.234576728" watchObservedRunningTime="2025-10-04 06:15:44.828467507 +0000 UTC m=+5387.236468132" Oct 04 06:15:50 crc kubenswrapper[4802]: I1004 06:15:50.360178 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:15:50 crc kubenswrapper[4802]: E1004 06:15:50.361242 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:16:02 crc kubenswrapper[4802]: I1004 06:16:02.359520 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:16:02 crc kubenswrapper[4802]: E1004 06:16:02.360296 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.098740 4802 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4pwzw"] Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.102859 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.113737 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-catalog-content\") pod \"redhat-marketplace-4pwzw\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.113826 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl2gl\" (UniqueName: \"kubernetes.io/projected/2ead3935-6ea3-4d65-9438-f71f8f05b175-kube-api-access-rl2gl\") pod \"redhat-marketplace-4pwzw\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.114057 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-utilities\") pod \"redhat-marketplace-4pwzw\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.116256 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pwzw"] Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.216325 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-utilities\") pod \"redhat-marketplace-4pwzw\" (UID: 
\"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.216583 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-catalog-content\") pod \"redhat-marketplace-4pwzw\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.216638 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl2gl\" (UniqueName: \"kubernetes.io/projected/2ead3935-6ea3-4d65-9438-f71f8f05b175-kube-api-access-rl2gl\") pod \"redhat-marketplace-4pwzw\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.217225 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-catalog-content\") pod \"redhat-marketplace-4pwzw\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.217294 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-utilities\") pod \"redhat-marketplace-4pwzw\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.265185 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl2gl\" (UniqueName: \"kubernetes.io/projected/2ead3935-6ea3-4d65-9438-f71f8f05b175-kube-api-access-rl2gl\") pod \"redhat-marketplace-4pwzw\" (UID: 
\"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.455458 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:08 crc kubenswrapper[4802]: I1004 06:16:08.950833 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pwzw"] Oct 04 06:16:09 crc kubenswrapper[4802]: I1004 06:16:09.054317 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pwzw" event={"ID":"2ead3935-6ea3-4d65-9438-f71f8f05b175","Type":"ContainerStarted","Data":"7b633884b6a6d60ee1bb46eeca1e474d9cd78e6082d55c3cbbb10cb9c453cb05"} Oct 04 06:16:10 crc kubenswrapper[4802]: I1004 06:16:10.070301 4802 generic.go:334] "Generic (PLEG): container finished" podID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerID="39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b" exitCode=0 Oct 04 06:16:10 crc kubenswrapper[4802]: I1004 06:16:10.070417 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pwzw" event={"ID":"2ead3935-6ea3-4d65-9438-f71f8f05b175","Type":"ContainerDied","Data":"39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b"} Oct 04 06:16:11 crc kubenswrapper[4802]: I1004 06:16:11.081092 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pwzw" event={"ID":"2ead3935-6ea3-4d65-9438-f71f8f05b175","Type":"ContainerStarted","Data":"f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207"} Oct 04 06:16:12 crc kubenswrapper[4802]: I1004 06:16:12.094946 4802 generic.go:334] "Generic (PLEG): container finished" podID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerID="f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207" exitCode=0 Oct 04 06:16:12 crc kubenswrapper[4802]: I1004 
06:16:12.096802 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pwzw" event={"ID":"2ead3935-6ea3-4d65-9438-f71f8f05b175","Type":"ContainerDied","Data":"f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207"} Oct 04 06:16:14 crc kubenswrapper[4802]: I1004 06:16:14.359798 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:16:14 crc kubenswrapper[4802]: E1004 06:16:14.360656 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:16:15 crc kubenswrapper[4802]: I1004 06:16:15.122257 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pwzw" event={"ID":"2ead3935-6ea3-4d65-9438-f71f8f05b175","Type":"ContainerStarted","Data":"b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80"} Oct 04 06:16:15 crc kubenswrapper[4802]: I1004 06:16:15.138378 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4pwzw" podStartSLOduration=3.2344162 podStartE2EDuration="7.138361051s" podCreationTimestamp="2025-10-04 06:16:08 +0000 UTC" firstStartedPulling="2025-10-04 06:16:10.08181047 +0000 UTC m=+5412.489811135" lastFinishedPulling="2025-10-04 06:16:13.985755361 +0000 UTC m=+5416.393755986" observedRunningTime="2025-10-04 06:16:15.13517429 +0000 UTC m=+5417.543174915" watchObservedRunningTime="2025-10-04 06:16:15.138361051 +0000 UTC m=+5417.546361676" Oct 04 06:16:18 crc kubenswrapper[4802]: I1004 06:16:18.457149 4802 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:18 crc kubenswrapper[4802]: I1004 06:16:18.457689 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:18 crc kubenswrapper[4802]: I1004 06:16:18.530885 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:19 crc kubenswrapper[4802]: I1004 06:16:19.211188 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:19 crc kubenswrapper[4802]: I1004 06:16:19.257079 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pwzw"] Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.173096 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4pwzw" podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerName="registry-server" containerID="cri-o://b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80" gracePeriod=2 Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.601727 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.720300 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-catalog-content\") pod \"2ead3935-6ea3-4d65-9438-f71f8f05b175\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.720420 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-utilities\") pod \"2ead3935-6ea3-4d65-9438-f71f8f05b175\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.720501 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl2gl\" (UniqueName: \"kubernetes.io/projected/2ead3935-6ea3-4d65-9438-f71f8f05b175-kube-api-access-rl2gl\") pod \"2ead3935-6ea3-4d65-9438-f71f8f05b175\" (UID: \"2ead3935-6ea3-4d65-9438-f71f8f05b175\") " Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.721270 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-utilities" (OuterVolumeSpecName: "utilities") pod "2ead3935-6ea3-4d65-9438-f71f8f05b175" (UID: "2ead3935-6ea3-4d65-9438-f71f8f05b175"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.725860 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ead3935-6ea3-4d65-9438-f71f8f05b175-kube-api-access-rl2gl" (OuterVolumeSpecName: "kube-api-access-rl2gl") pod "2ead3935-6ea3-4d65-9438-f71f8f05b175" (UID: "2ead3935-6ea3-4d65-9438-f71f8f05b175"). InnerVolumeSpecName "kube-api-access-rl2gl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.731920 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ead3935-6ea3-4d65-9438-f71f8f05b175" (UID: "2ead3935-6ea3-4d65-9438-f71f8f05b175"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.822993 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.823144 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ead3935-6ea3-4d65-9438-f71f8f05b175-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 06:16:21 crc kubenswrapper[4802]: I1004 06:16:21.823154 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl2gl\" (UniqueName: \"kubernetes.io/projected/2ead3935-6ea3-4d65-9438-f71f8f05b175-kube-api-access-rl2gl\") on node \"crc\" DevicePath \"\"" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.185277 4802 generic.go:334] "Generic (PLEG): container finished" podID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerID="b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80" exitCode=0 Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.185325 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pwzw" event={"ID":"2ead3935-6ea3-4d65-9438-f71f8f05b175","Type":"ContainerDied","Data":"b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80"} Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.185342 4802 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pwzw" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.185368 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pwzw" event={"ID":"2ead3935-6ea3-4d65-9438-f71f8f05b175","Type":"ContainerDied","Data":"7b633884b6a6d60ee1bb46eeca1e474d9cd78e6082d55c3cbbb10cb9c453cb05"} Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.185391 4802 scope.go:117] "RemoveContainer" containerID="b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.205892 4802 scope.go:117] "RemoveContainer" containerID="f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.223961 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pwzw"] Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.233501 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pwzw"] Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.234711 4802 scope.go:117] "RemoveContainer" containerID="39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.281411 4802 scope.go:117] "RemoveContainer" containerID="b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80" Oct 04 06:16:22 crc kubenswrapper[4802]: E1004 06:16:22.281834 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80\": container with ID starting with b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80 not found: ID does not exist" containerID="b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.281861 4802 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80"} err="failed to get container status \"b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80\": rpc error: code = NotFound desc = could not find container \"b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80\": container with ID starting with b588bb02e09fe0372f809413b6d1089f2ad1add6db173b70a4b4cb6c86b12a80 not found: ID does not exist" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.281881 4802 scope.go:117] "RemoveContainer" containerID="f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207" Oct 04 06:16:22 crc kubenswrapper[4802]: E1004 06:16:22.282151 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207\": container with ID starting with f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207 not found: ID does not exist" containerID="f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.282168 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207"} err="failed to get container status \"f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207\": rpc error: code = NotFound desc = could not find container \"f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207\": container with ID starting with f4f17ac21b6bcb1bdcfca9a92619cef5998bd0c5c0b7d7bb9925514d88211207 not found: ID does not exist" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.282183 4802 scope.go:117] "RemoveContainer" containerID="39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b" Oct 04 06:16:22 crc kubenswrapper[4802]: E1004 
06:16:22.282421 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b\": container with ID starting with 39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b not found: ID does not exist" containerID="39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.282464 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b"} err="failed to get container status \"39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b\": rpc error: code = NotFound desc = could not find container \"39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b\": container with ID starting with 39b51d66ed0c6f348f0c4fc5abe82764a756710bc9a301c0324c6565d99d816b not found: ID does not exist" Oct 04 06:16:22 crc kubenswrapper[4802]: I1004 06:16:22.368140 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" path="/var/lib/kubelet/pods/2ead3935-6ea3-4d65-9438-f71f8f05b175/volumes" Oct 04 06:16:29 crc kubenswrapper[4802]: I1004 06:16:29.359909 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:16:29 crc kubenswrapper[4802]: E1004 06:16:29.360749 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:16:41 crc kubenswrapper[4802]: I1004 06:16:41.359571 
4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:16:41 crc kubenswrapper[4802]: E1004 06:16:41.362099 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:16:41 crc kubenswrapper[4802]: I1004 06:16:41.415095 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5d4c9d8df8-pp97l_18615f98-63ad-48ee-83c3-1caeee1be993/barbican-api/0.log" Oct 04 06:16:41 crc kubenswrapper[4802]: I1004 06:16:41.683594 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5d4c9d8df8-pp97l_18615f98-63ad-48ee-83c3-1caeee1be993/barbican-api-log/0.log" Oct 04 06:16:41 crc kubenswrapper[4802]: I1004 06:16:41.824420 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-88b556fb6-qrcjc_e89ef05b-4ac0-495f-886e-ab8d4c37195d/barbican-keystone-listener/0.log" Oct 04 06:16:42 crc kubenswrapper[4802]: I1004 06:16:42.052762 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-88b556fb6-qrcjc_e89ef05b-4ac0-495f-886e-ab8d4c37195d/barbican-keystone-listener-log/0.log" Oct 04 06:16:42 crc kubenswrapper[4802]: I1004 06:16:42.217067 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f55c6f5ff-whrkn_bdbd3c17-f18e-4a9d-a65e-1d199459c0f3/barbican-worker/0.log" Oct 04 06:16:42 crc kubenswrapper[4802]: I1004 06:16:42.233538 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-f55c6f5ff-whrkn_bdbd3c17-f18e-4a9d-a65e-1d199459c0f3/barbican-worker-log/0.log" Oct 04 06:16:42 crc kubenswrapper[4802]: I1004 06:16:42.464745 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-zgkb6_c902dd3b-da2a-4755-8f50-b3e93d33630f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:42 crc kubenswrapper[4802]: I1004 06:16:42.701897 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f81d6574-95c4-4583-893a-87f8a22d6162/ceilometer-central-agent/0.log" Oct 04 06:16:42 crc kubenswrapper[4802]: I1004 06:16:42.871018 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f81d6574-95c4-4583-893a-87f8a22d6162/proxy-httpd/0.log" Oct 04 06:16:42 crc kubenswrapper[4802]: I1004 06:16:42.909550 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f81d6574-95c4-4583-893a-87f8a22d6162/ceilometer-notification-agent/0.log" Oct 04 06:16:43 crc kubenswrapper[4802]: I1004 06:16:43.034205 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f81d6574-95c4-4583-893a-87f8a22d6162/sg-core/0.log" Oct 04 06:16:43 crc kubenswrapper[4802]: I1004 06:16:43.229144 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-5ppf8_37f49664-6da8-4406-8f02-db640b6bcbd1/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:43 crc kubenswrapper[4802]: I1004 06:16:43.462327 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mqwqh_852da720-96fe-413d-8126-89ebf6f859ea/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:44 crc kubenswrapper[4802]: I1004 06:16:44.378758 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_9751f30f-b58e-4f5e-9990-e63ee092a495/cinder-api/0.log" Oct 04 06:16:44 crc kubenswrapper[4802]: I1004 06:16:44.466804 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9751f30f-b58e-4f5e-9990-e63ee092a495/cinder-api-log/0.log" Oct 04 06:16:44 crc kubenswrapper[4802]: I1004 06:16:44.581394 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_7b5d6eca-76be-4473-8c62-b92cd50ba646/probe/0.log" Oct 04 06:16:44 crc kubenswrapper[4802]: I1004 06:16:44.844264 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_03367bc3-3554-4b43-8215-070e0d9d8c13/cinder-scheduler/0.log" Oct 04 06:16:44 crc kubenswrapper[4802]: I1004 06:16:44.974131 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_03367bc3-3554-4b43-8215-070e0d9d8c13/probe/0.log" Oct 04 06:16:45 crc kubenswrapper[4802]: I1004 06:16:45.027354 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_7b5d6eca-76be-4473-8c62-b92cd50ba646/cinder-backup/0.log" Oct 04 06:16:45 crc kubenswrapper[4802]: I1004 06:16:45.231532 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_cf0554fa-2bf2-45d4-a620-7445764b693d/probe/0.log" Oct 04 06:16:45 crc kubenswrapper[4802]: I1004 06:16:45.564742 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-245d6_b9ca1c91-cdae-401a-aa21-c5326e8afdb6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:45 crc kubenswrapper[4802]: I1004 06:16:45.760208 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-npkj2_f08fb8cd-38aa-4cde-9321-43ae01965484/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:45 crc kubenswrapper[4802]: I1004 06:16:45.988175 
4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-9fvn9_cee4419e-b442-4f88-8502-dbdafe82e436/init/0.log" Oct 04 06:16:46 crc kubenswrapper[4802]: I1004 06:16:46.173334 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-9fvn9_cee4419e-b442-4f88-8502-dbdafe82e436/init/0.log" Oct 04 06:16:46 crc kubenswrapper[4802]: I1004 06:16:46.292068 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-9fvn9_cee4419e-b442-4f88-8502-dbdafe82e436/dnsmasq-dns/0.log" Oct 04 06:16:46 crc kubenswrapper[4802]: I1004 06:16:46.557487 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c3b55104-4245-45f1-91ef-c1b4fd6682e4/glance-httpd/0.log" Oct 04 06:16:46 crc kubenswrapper[4802]: I1004 06:16:46.616793 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c3b55104-4245-45f1-91ef-c1b4fd6682e4/glance-log/0.log" Oct 04 06:16:46 crc kubenswrapper[4802]: I1004 06:16:46.870549 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_06eadbf9-f32f-4702-9ec7-16ee44f3022e/glance-httpd/0.log" Oct 04 06:16:47 crc kubenswrapper[4802]: I1004 06:16:47.012253 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_06eadbf9-f32f-4702-9ec7-16ee44f3022e/glance-log/0.log" Oct 04 06:16:47 crc kubenswrapper[4802]: I1004 06:16:47.357982 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-58d88cc67b-v6jgr_c9cb164b-15ee-488d-ae7b-cc74da075072/horizon/0.log" Oct 04 06:16:47 crc kubenswrapper[4802]: I1004 06:16:47.734307 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-58d88cc67b-v6jgr_c9cb164b-15ee-488d-ae7b-cc74da075072/horizon-log/0.log" Oct 04 06:16:47 crc kubenswrapper[4802]: I1004 06:16:47.748611 4802 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-cwhjw_0448f188-480e-42a0-9b37-e06b990c17bd/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:47 crc kubenswrapper[4802]: I1004 06:16:47.959010 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7c9sw_aa515445-486c-4d0f-94ac-2f0bb785120f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:48 crc kubenswrapper[4802]: I1004 06:16:48.162609 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29325961-9qgmt_b73fd896-44ec-4db0-b095-86311908fc72/keystone-cron/0.log" Oct 04 06:16:48 crc kubenswrapper[4802]: I1004 06:16:48.754736 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_92739001-1a5b-465c-a6f7-728d00aeadfd/kube-state-metrics/0.log" Oct 04 06:16:49 crc kubenswrapper[4802]: I1004 06:16:49.180381 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f8c67fc4f-cnjpc_bb5643ab-5cdd-42fa-b96a-180d3137816d/keystone-api/0.log" Oct 04 06:16:49 crc kubenswrapper[4802]: I1004 06:16:49.239198 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-42d2p_365ae152-a4d6-4ecd-b8c6-ea3d110ebcde/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:49 crc kubenswrapper[4802]: I1004 06:16:49.600667 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_de69abcf-8596-4354-8150-6469791192cd/manila-api/0.log" Oct 04 06:16:49 crc kubenswrapper[4802]: I1004 06:16:49.656243 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_de69abcf-8596-4354-8150-6469791192cd/manila-api-log/0.log" Oct 04 06:16:50 crc kubenswrapper[4802]: I1004 06:16:50.016286 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_0452628c-712f-42ea-877a-39363e757b7f/manila-scheduler/0.log" Oct 04 06:16:50 crc kubenswrapper[4802]: I1004 06:16:50.056890 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_0452628c-712f-42ea-877a-39363e757b7f/probe/0.log" Oct 04 06:16:50 crc kubenswrapper[4802]: I1004 06:16:50.286097 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_779c01dc-3f55-4f94-9f1d-b78f6aa256f3/manila-share/0.log" Oct 04 06:16:50 crc kubenswrapper[4802]: I1004 06:16:50.458511 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_779c01dc-3f55-4f94-9f1d-b78f6aa256f3/probe/0.log" Oct 04 06:16:51 crc kubenswrapper[4802]: I1004 06:16:51.343220 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c7dfb8d9-7pqjl_220afca5-a3fb-496f-94b8-9f0123f0393f/neutron-api/0.log" Oct 04 06:16:51 crc kubenswrapper[4802]: I1004 06:16:51.459901 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_cf0554fa-2bf2-45d4-a620-7445764b693d/cinder-volume/0.log" Oct 04 06:16:51 crc kubenswrapper[4802]: I1004 06:16:51.541730 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ca30c560-397d-48e1-8aa6-cf27e47b055d/memcached/0.log" Oct 04 06:16:51 crc kubenswrapper[4802]: I1004 06:16:51.628335 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c7dfb8d9-7pqjl_220afca5-a3fb-496f-94b8-9f0123f0393f/neutron-httpd/0.log" Oct 04 06:16:51 crc kubenswrapper[4802]: I1004 06:16:51.729358 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-b5mgz_e1c28ae5-e04b-489e-96fe-aab0c804d5b4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:52 crc kubenswrapper[4802]: I1004 06:16:52.412662 4802 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-api-0_3ecc5f0f-85cb-4fc7-b243-d81502fd473d/nova-api-log/0.log" Oct 04 06:16:52 crc kubenswrapper[4802]: I1004 06:16:52.490294 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_76c6a991-4c66-48fc-b4a4-8da5d55279c7/nova-cell0-conductor-conductor/0.log" Oct 04 06:16:52 crc kubenswrapper[4802]: I1004 06:16:52.750569 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3ecc5f0f-85cb-4fc7-b243-d81502fd473d/nova-api-api/0.log" Oct 04 06:16:52 crc kubenswrapper[4802]: I1004 06:16:52.779490 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_98fc65e1-5e5c-42c7-8902-34e1aa56519e/nova-cell1-conductor-conductor/0.log" Oct 04 06:16:53 crc kubenswrapper[4802]: I1004 06:16:53.007885 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_dac2cbfb-9cef-4319-92db-1c352393b407/nova-cell1-novncproxy-novncproxy/0.log" Oct 04 06:16:53 crc kubenswrapper[4802]: I1004 06:16:53.059249 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xbrfm_5c113e22-c317-4882-9403-6bdc543e9775/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:53 crc kubenswrapper[4802]: I1004 06:16:53.258206 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_05062b6a-0940-429e-abb8-b7108f6a9e9e/nova-metadata-log/0.log" Oct 04 06:16:53 crc kubenswrapper[4802]: I1004 06:16:53.610044 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_baf7828c-edbb-4a9c-aadf-f52ecea9097e/nova-scheduler-scheduler/0.log" Oct 04 06:16:53 crc kubenswrapper[4802]: I1004 06:16:53.843686 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f6d12b3-4eff-47a3-986d-c51e9425f64f/mysql-bootstrap/0.log" Oct 04 06:16:54 crc 
kubenswrapper[4802]: I1004 06:16:54.020374 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f6d12b3-4eff-47a3-986d-c51e9425f64f/mysql-bootstrap/0.log" Oct 04 06:16:54 crc kubenswrapper[4802]: I1004 06:16:54.064279 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f6d12b3-4eff-47a3-986d-c51e9425f64f/galera/0.log" Oct 04 06:16:54 crc kubenswrapper[4802]: I1004 06:16:54.255511 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e6167c18-4dec-48d9-bd81-1a6b6b9e6488/mysql-bootstrap/0.log" Oct 04 06:16:54 crc kubenswrapper[4802]: I1004 06:16:54.359660 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:16:54 crc kubenswrapper[4802]: E1004 06:16:54.360074 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:16:54 crc kubenswrapper[4802]: I1004 06:16:54.403483 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e6167c18-4dec-48d9-bd81-1a6b6b9e6488/mysql-bootstrap/0.log" Oct 04 06:16:54 crc kubenswrapper[4802]: I1004 06:16:54.496083 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e6167c18-4dec-48d9-bd81-1a6b6b9e6488/galera/0.log" Oct 04 06:16:54 crc kubenswrapper[4802]: I1004 06:16:54.686969 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_05062b6a-0940-429e-abb8-b7108f6a9e9e/nova-metadata-metadata/0.log" Oct 04 06:16:54 crc kubenswrapper[4802]: I1004 
06:16:54.710031 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f26f2ddf-5813-4d3b-aa54-302bba18586f/openstackclient/0.log" Oct 04 06:16:54 crc kubenswrapper[4802]: I1004 06:16:54.869004 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-75hqf_f41ad0a7-949f-48d9-9871-0ce5c64e8e13/ovn-controller/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.034748 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qkdm6_395c3b28-d30d-4457-bc17-3f88298e11a0/ovsdb-server-init/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.064084 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wdhwb_61c07999-4012-414b-8762-b64f9ed38503/openstack-network-exporter/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.264503 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qkdm6_395c3b28-d30d-4457-bc17-3f88298e11a0/ovsdb-server-init/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.269714 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qkdm6_395c3b28-d30d-4457-bc17-3f88298e11a0/ovsdb-server/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.272050 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qkdm6_395c3b28-d30d-4457-bc17-3f88298e11a0/ovs-vswitchd/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.454823 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-brppl_ad62a6a8-f574-42b8-b558-fe1f7bf9d36a/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.478679 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c2df19f5-e90b-4a43-95ea-f4c64f492de6/openstack-network-exporter/0.log" Oct 
04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.505521 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c2df19f5-e90b-4a43-95ea-f4c64f492de6/ovn-northd/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.638253 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6/openstack-network-exporter/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.700926 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a1588a5f-251a-4dd9-93a8-0f0fbcec1bf6/ovsdbserver-nb/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.825436 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_812c69b4-108a-4b14-83b2-95daa7c5949d/ovsdbserver-sb/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.840497 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_812c69b4-108a-4b14-83b2-95daa7c5949d/openstack-network-exporter/0.log" Oct 04 06:16:55 crc kubenswrapper[4802]: I1004 06:16:55.990947 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64457d9b48-l5jfj_bb69fc77-66b3-4f06-b438-8fbd159a4c3f/placement-api/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.094509 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64457d9b48-l5jfj_bb69fc77-66b3-4f06-b438-8fbd159a4c3f/placement-log/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.113711 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e886bcf5-c8ec-465d-87cc-22b905bec5da/setup-container/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.312516 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e886bcf5-c8ec-465d-87cc-22b905bec5da/setup-container/0.log" Oct 04 06:16:56 crc 
kubenswrapper[4802]: I1004 06:16:56.323728 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ad74ddca-2d42-4c28-8147-6088b9876fa1/setup-container/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.333657 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e886bcf5-c8ec-465d-87cc-22b905bec5da/rabbitmq/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.510498 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ad74ddca-2d42-4c28-8147-6088b9876fa1/setup-container/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.520945 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ad74ddca-2d42-4c28-8147-6088b9876fa1/rabbitmq/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.536555 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bft77_86da9375-0b75-4d5b-8519-e2cba79ba8f2/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.702090 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-v2db5_1013ab05-0b6e-458d-b876-e7bb43cbd153/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.841631 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8hftb_7b7f1a35-8c31-4ec4-8ab0-778aabb8a7ce/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:16:56 crc kubenswrapper[4802]: I1004 06:16:56.935750 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vcbwm_a9517a29-8dbc-4973-bdbf-67bf3f9bddde/ssh-known-hosts-edpm-deployment/0.log" Oct 04 06:16:57 crc kubenswrapper[4802]: I1004 06:16:57.063358 4802 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9ff51956-c2e9-4e25-9cd4-56bb6304b7db/tempest-tests-tempest-tests-runner/0.log" Oct 04 06:16:57 crc kubenswrapper[4802]: I1004 06:16:57.118835 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_024d4dc0-8096-467a-9b96-9394a8965e48/test-operator-logs-container/0.log" Oct 04 06:16:57 crc kubenswrapper[4802]: I1004 06:16:57.246412 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6p8dr_0a7e40d1-1503-445c-997e-4094ec553767/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 04 06:17:07 crc kubenswrapper[4802]: I1004 06:17:07.360364 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:17:07 crc kubenswrapper[4802]: E1004 06:17:07.361867 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:17:22 crc kubenswrapper[4802]: I1004 06:17:22.361189 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:17:22 crc kubenswrapper[4802]: E1004 06:17:22.362478 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:17:35 crc kubenswrapper[4802]: I1004 06:17:35.359828 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:17:35 crc kubenswrapper[4802]: E1004 06:17:35.361009 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:17:46 crc kubenswrapper[4802]: I1004 06:17:46.116014 4802 generic.go:334] "Generic (PLEG): container finished" podID="f209b943-24a3-4b9d-b770-68938ad79d24" containerID="44f137cb949cc99e75f4d43ab331c0509a54157ab0b4af33f73dc25fd2b57a1e" exitCode=0 Oct 04 06:17:46 crc kubenswrapper[4802]: I1004 06:17:46.116079 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" event={"ID":"f209b943-24a3-4b9d-b770-68938ad79d24","Type":"ContainerDied","Data":"44f137cb949cc99e75f4d43ab331c0509a54157ab0b4af33f73dc25fd2b57a1e"} Oct 04 06:17:46 crc kubenswrapper[4802]: I1004 06:17:46.361492 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:17:46 crc kubenswrapper[4802]: E1004 06:17:46.362028 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" 
podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.265973 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.306408 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gxgzv/crc-debug-vtg6z"] Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.317451 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gxgzv/crc-debug-vtg6z"] Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.348626 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f209b943-24a3-4b9d-b770-68938ad79d24-host\") pod \"f209b943-24a3-4b9d-b770-68938ad79d24\" (UID: \"f209b943-24a3-4b9d-b770-68938ad79d24\") " Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.348832 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f209b943-24a3-4b9d-b770-68938ad79d24-host" (OuterVolumeSpecName: "host") pod "f209b943-24a3-4b9d-b770-68938ad79d24" (UID: "f209b943-24a3-4b9d-b770-68938ad79d24"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.348866 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m69lm\" (UniqueName: \"kubernetes.io/projected/f209b943-24a3-4b9d-b770-68938ad79d24-kube-api-access-m69lm\") pod \"f209b943-24a3-4b9d-b770-68938ad79d24\" (UID: \"f209b943-24a3-4b9d-b770-68938ad79d24\") " Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.349476 4802 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f209b943-24a3-4b9d-b770-68938ad79d24-host\") on node \"crc\" DevicePath \"\"" Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.359021 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f209b943-24a3-4b9d-b770-68938ad79d24-kube-api-access-m69lm" (OuterVolumeSpecName: "kube-api-access-m69lm") pod "f209b943-24a3-4b9d-b770-68938ad79d24" (UID: "f209b943-24a3-4b9d-b770-68938ad79d24"). InnerVolumeSpecName "kube-api-access-m69lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:17:47 crc kubenswrapper[4802]: I1004 06:17:47.452700 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m69lm\" (UniqueName: \"kubernetes.io/projected/f209b943-24a3-4b9d-b770-68938ad79d24-kube-api-access-m69lm\") on node \"crc\" DevicePath \"\"" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.144099 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ec462ad4a53d6372a0c329586378b7b872e8b4ad5a6b340fadcfb17ac62b5e" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.144164 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-vtg6z" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.379045 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f209b943-24a3-4b9d-b770-68938ad79d24" path="/var/lib/kubelet/pods/f209b943-24a3-4b9d-b770-68938ad79d24/volumes" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.550196 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gxgzv/crc-debug-hbrrd"] Oct 04 06:17:48 crc kubenswrapper[4802]: E1004 06:17:48.551244 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerName="extract-content" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.551283 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerName="extract-content" Oct 04 06:17:48 crc kubenswrapper[4802]: E1004 06:17:48.551324 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerName="registry-server" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.551337 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerName="registry-server" Oct 04 06:17:48 crc kubenswrapper[4802]: E1004 06:17:48.551364 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f209b943-24a3-4b9d-b770-68938ad79d24" containerName="container-00" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.551375 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f209b943-24a3-4b9d-b770-68938ad79d24" containerName="container-00" Oct 04 06:17:48 crc kubenswrapper[4802]: E1004 06:17:48.551395 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerName="extract-utilities" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.551407 4802 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerName="extract-utilities" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.551759 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ead3935-6ea3-4d65-9438-f71f8f05b175" containerName="registry-server" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.551782 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f209b943-24a3-4b9d-b770-68938ad79d24" containerName="container-00" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.561961 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.568029 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gxgzv"/"default-dockercfg-jsjhd" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.583448 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrbq\" (UniqueName: \"kubernetes.io/projected/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-kube-api-access-frrbq\") pod \"crc-debug-hbrrd\" (UID: \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\") " pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.584103 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-host\") pod \"crc-debug-hbrrd\" (UID: \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\") " pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.688271 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrbq\" (UniqueName: \"kubernetes.io/projected/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-kube-api-access-frrbq\") pod \"crc-debug-hbrrd\" (UID: \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\") " 
pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.688363 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-host\") pod \"crc-debug-hbrrd\" (UID: \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\") " pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.688612 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-host\") pod \"crc-debug-hbrrd\" (UID: \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\") " pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.743438 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrbq\" (UniqueName: \"kubernetes.io/projected/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-kube-api-access-frrbq\") pod \"crc-debug-hbrrd\" (UID: \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\") " pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:48 crc kubenswrapper[4802]: I1004 06:17:48.894190 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:49 crc kubenswrapper[4802]: I1004 06:17:49.155446 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" event={"ID":"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d","Type":"ContainerStarted","Data":"63f349ee74794f0a55bf89eaf7da2b59fe8f03a9fa1ca5892cd0a023b54db204"} Oct 04 06:17:50 crc kubenswrapper[4802]: I1004 06:17:50.169726 4802 generic.go:334] "Generic (PLEG): container finished" podID="1d7dc03a-1086-4b92-a6d2-2e5e7551c69d" containerID="ecc49ff2965dc5d7e07cf02f7dd2d711dcea4ce9a41ecda5fa0c3de5ce9eacfc" exitCode=0 Oct 04 06:17:50 crc kubenswrapper[4802]: I1004 06:17:50.169848 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" event={"ID":"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d","Type":"ContainerDied","Data":"ecc49ff2965dc5d7e07cf02f7dd2d711dcea4ce9a41ecda5fa0c3de5ce9eacfc"} Oct 04 06:17:51 crc kubenswrapper[4802]: I1004 06:17:51.303514 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:17:51 crc kubenswrapper[4802]: I1004 06:17:51.348430 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrbq\" (UniqueName: \"kubernetes.io/projected/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-kube-api-access-frrbq\") pod \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\" (UID: \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\") " Oct 04 06:17:51 crc kubenswrapper[4802]: I1004 06:17:51.348739 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-host\") pod \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\" (UID: \"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d\") " Oct 04 06:17:51 crc kubenswrapper[4802]: I1004 06:17:51.348801 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-host" (OuterVolumeSpecName: "host") pod "1d7dc03a-1086-4b92-a6d2-2e5e7551c69d" (UID: "1d7dc03a-1086-4b92-a6d2-2e5e7551c69d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 06:17:51 crc kubenswrapper[4802]: I1004 06:17:51.349137 4802 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-host\") on node \"crc\" DevicePath \"\"" Oct 04 06:17:51 crc kubenswrapper[4802]: I1004 06:17:51.362754 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-kube-api-access-frrbq" (OuterVolumeSpecName: "kube-api-access-frrbq") pod "1d7dc03a-1086-4b92-a6d2-2e5e7551c69d" (UID: "1d7dc03a-1086-4b92-a6d2-2e5e7551c69d"). InnerVolumeSpecName "kube-api-access-frrbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:17:51 crc kubenswrapper[4802]: I1004 06:17:51.449820 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frrbq\" (UniqueName: \"kubernetes.io/projected/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d-kube-api-access-frrbq\") on node \"crc\" DevicePath \"\"" Oct 04 06:17:52 crc kubenswrapper[4802]: I1004 06:17:52.187506 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" event={"ID":"1d7dc03a-1086-4b92-a6d2-2e5e7551c69d","Type":"ContainerDied","Data":"63f349ee74794f0a55bf89eaf7da2b59fe8f03a9fa1ca5892cd0a023b54db204"} Oct 04 06:17:52 crc kubenswrapper[4802]: I1004 06:17:52.187542 4802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f349ee74794f0a55bf89eaf7da2b59fe8f03a9fa1ca5892cd0a023b54db204" Oct 04 06:17:52 crc kubenswrapper[4802]: I1004 06:17:52.187567 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-hbrrd" Oct 04 06:18:00 crc kubenswrapper[4802]: I1004 06:18:00.032525 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gxgzv/crc-debug-hbrrd"] Oct 04 06:18:00 crc kubenswrapper[4802]: I1004 06:18:00.042637 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gxgzv/crc-debug-hbrrd"] Oct 04 06:18:00 crc kubenswrapper[4802]: I1004 06:18:00.360434 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:18:00 crc kubenswrapper[4802]: E1004 06:18:00.361030 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:18:00 crc kubenswrapper[4802]: I1004 06:18:00.377862 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7dc03a-1086-4b92-a6d2-2e5e7551c69d" path="/var/lib/kubelet/pods/1d7dc03a-1086-4b92-a6d2-2e5e7551c69d/volumes" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.264715 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gxgzv/crc-debug-gmn47"] Oct 04 06:18:01 crc kubenswrapper[4802]: E1004 06:18:01.265506 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7dc03a-1086-4b92-a6d2-2e5e7551c69d" containerName="container-00" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.265525 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7dc03a-1086-4b92-a6d2-2e5e7551c69d" containerName="container-00" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.265816 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7dc03a-1086-4b92-a6d2-2e5e7551c69d" containerName="container-00" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.266493 4802 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.268629 4802 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gxgzv"/"default-dockercfg-jsjhd" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.297108 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7c1755-58e2-4ab7-835e-ea09fb96086c-host\") pod \"crc-debug-gmn47\" (UID: \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\") " pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.297357 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g79gc\" (UniqueName: \"kubernetes.io/projected/ce7c1755-58e2-4ab7-835e-ea09fb96086c-kube-api-access-g79gc\") pod \"crc-debug-gmn47\" (UID: \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\") " pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.399464 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7c1755-58e2-4ab7-835e-ea09fb96086c-host\") pod \"crc-debug-gmn47\" (UID: \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\") " pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.399748 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g79gc\" (UniqueName: \"kubernetes.io/projected/ce7c1755-58e2-4ab7-835e-ea09fb96086c-kube-api-access-g79gc\") pod \"crc-debug-gmn47\" (UID: \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\") " pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.400328 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/ce7c1755-58e2-4ab7-835e-ea09fb96086c-host\") pod \"crc-debug-gmn47\" (UID: \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\") " pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.424723 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g79gc\" (UniqueName: \"kubernetes.io/projected/ce7c1755-58e2-4ab7-835e-ea09fb96086c-kube-api-access-g79gc\") pod \"crc-debug-gmn47\" (UID: \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\") " pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:01 crc kubenswrapper[4802]: I1004 06:18:01.589073 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:02 crc kubenswrapper[4802]: I1004 06:18:02.273160 4802 generic.go:334] "Generic (PLEG): container finished" podID="ce7c1755-58e2-4ab7-835e-ea09fb96086c" containerID="b1af7a4823552a8883b41969df77914bbdc651474aee94d917f616a9b891b701" exitCode=0 Oct 04 06:18:02 crc kubenswrapper[4802]: I1004 06:18:02.273286 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/crc-debug-gmn47" event={"ID":"ce7c1755-58e2-4ab7-835e-ea09fb96086c","Type":"ContainerDied","Data":"b1af7a4823552a8883b41969df77914bbdc651474aee94d917f616a9b891b701"} Oct 04 06:18:02 crc kubenswrapper[4802]: I1004 06:18:02.273513 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/crc-debug-gmn47" event={"ID":"ce7c1755-58e2-4ab7-835e-ea09fb96086c","Type":"ContainerStarted","Data":"7988dc057e51250d128b2edbc44e66229a3061570fdcc5dbbbc593e0b8bf262e"} Oct 04 06:18:02 crc kubenswrapper[4802]: I1004 06:18:02.316890 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gxgzv/crc-debug-gmn47"] Oct 04 06:18:02 crc kubenswrapper[4802]: I1004 06:18:02.325024 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-gxgzv/crc-debug-gmn47"] Oct 04 06:18:03 crc kubenswrapper[4802]: I1004 06:18:03.385428 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:03 crc kubenswrapper[4802]: I1004 06:18:03.541499 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7c1755-58e2-4ab7-835e-ea09fb96086c-host\") pod \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\" (UID: \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\") " Oct 04 06:18:03 crc kubenswrapper[4802]: I1004 06:18:03.541618 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce7c1755-58e2-4ab7-835e-ea09fb96086c-host" (OuterVolumeSpecName: "host") pod "ce7c1755-58e2-4ab7-835e-ea09fb96086c" (UID: "ce7c1755-58e2-4ab7-835e-ea09fb96086c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 04 06:18:03 crc kubenswrapper[4802]: I1004 06:18:03.541912 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g79gc\" (UniqueName: \"kubernetes.io/projected/ce7c1755-58e2-4ab7-835e-ea09fb96086c-kube-api-access-g79gc\") pod \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\" (UID: \"ce7c1755-58e2-4ab7-835e-ea09fb96086c\") " Oct 04 06:18:03 crc kubenswrapper[4802]: I1004 06:18:03.542591 4802 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7c1755-58e2-4ab7-835e-ea09fb96086c-host\") on node \"crc\" DevicePath \"\"" Oct 04 06:18:03 crc kubenswrapper[4802]: I1004 06:18:03.551914 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7c1755-58e2-4ab7-835e-ea09fb96086c-kube-api-access-g79gc" (OuterVolumeSpecName: "kube-api-access-g79gc") pod "ce7c1755-58e2-4ab7-835e-ea09fb96086c" (UID: "ce7c1755-58e2-4ab7-835e-ea09fb96086c"). 
InnerVolumeSpecName "kube-api-access-g79gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:18:03 crc kubenswrapper[4802]: I1004 06:18:03.644828 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g79gc\" (UniqueName: \"kubernetes.io/projected/ce7c1755-58e2-4ab7-835e-ea09fb96086c-kube-api-access-g79gc\") on node \"crc\" DevicePath \"\"" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.086084 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq_8aefca93-0b5f-4dc0-9e93-4b726272fc8d/util/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.256848 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq_8aefca93-0b5f-4dc0-9e93-4b726272fc8d/util/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.277574 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq_8aefca93-0b5f-4dc0-9e93-4b726272fc8d/pull/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.292350 4802 scope.go:117] "RemoveContainer" containerID="b1af7a4823552a8883b41969df77914bbdc651474aee94d917f616a9b891b701" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.292484 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/crc-debug-gmn47" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.311083 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq_8aefca93-0b5f-4dc0-9e93-4b726272fc8d/pull/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.372048 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7c1755-58e2-4ab7-835e-ea09fb96086c" path="/var/lib/kubelet/pods/ce7c1755-58e2-4ab7-835e-ea09fb96086c/volumes" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.445686 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq_8aefca93-0b5f-4dc0-9e93-4b726272fc8d/util/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.478715 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq_8aefca93-0b5f-4dc0-9e93-4b726272fc8d/extract/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.484454 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f91e8721413a297fdc0268fe4b15e744f56ac02ef69d0f5de0773e44a9dtfq_8aefca93-0b5f-4dc0-9e93-4b726272fc8d/pull/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.649762 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5f7c849b98-qwpjf_e8f690af-8476-4fae-821f-cc822c9a1273/kube-rbac-proxy/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.668301 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7654479b5b-qdnwx_50a3c2cf-e05f-43ac-833a-1ae097417c9b/kube-rbac-proxy/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.735449 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5f7c849b98-qwpjf_e8f690af-8476-4fae-821f-cc822c9a1273/manager/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.858909 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-tbhkj_fe92c819-2989-4bb4-8051-6d457ac6b121/kube-rbac-proxy/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.866335 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7654479b5b-qdnwx_50a3c2cf-e05f-43ac-833a-1ae097417c9b/manager/0.log" Oct 04 06:18:04 crc kubenswrapper[4802]: I1004 06:18:04.935753 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-tbhkj_fe92c819-2989-4bb4-8051-6d457ac6b121/manager/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.040375 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5568b5d68-g9stf_cc8e0f09-d68e-4cab-af19-86180b78cb70/kube-rbac-proxy/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.118171 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5568b5d68-g9stf_cc8e0f09-d68e-4cab-af19-86180b78cb70/manager/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.215016 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-8f58bc9db-bczqd_d877d19f-e34d-429e-8bf7-e7f9c6c141d1/kube-rbac-proxy/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.253988 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-8f58bc9db-bczqd_d877d19f-e34d-429e-8bf7-e7f9c6c141d1/manager/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.330695 4802 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54876c876f-c2wf7_9f060f22-b0a6-41e0-b88e-3c4411b06f1d/kube-rbac-proxy/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.383531 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54876c876f-c2wf7_9f060f22-b0a6-41e0-b88e-3c4411b06f1d/manager/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.483550 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-gj2l2_8975b7de-977f-45a6-b619-1bae2838c9eb/kube-rbac-proxy/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.616658 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-699b87f775-sgw2w_e59f4cc7-3f2d-43a7-91f6-cf589392f5fa/kube-rbac-proxy/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.699079 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-699b87f775-sgw2w_e59f4cc7-3f2d-43a7-91f6-cf589392f5fa/manager/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.702149 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-gj2l2_8975b7de-977f-45a6-b619-1bae2838c9eb/manager/0.log" Oct 04 06:18:05 crc kubenswrapper[4802]: I1004 06:18:05.950720 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-655d88ccb9-6zzzc_d1ae58bd-7435-4bb1-819c-2f085a231ce0/kube-rbac-proxy/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.042331 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-655d88ccb9-6zzzc_d1ae58bd-7435-4bb1-819c-2f085a231ce0/manager/0.log" Oct 04 06:18:06 crc 
kubenswrapper[4802]: I1004 06:18:06.098350 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-6pkx2_c7242ee7-cb17-471e-8639-fd36ccd2d398/kube-rbac-proxy/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.209074 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-6pkx2_c7242ee7-cb17-471e-8639-fd36ccd2d398/manager/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.251120 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-xqp95_f22b8bf6-a48f-41e3-88f4-601a8befda4b/kube-rbac-proxy/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.325947 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-xqp95_f22b8bf6-a48f-41e3-88f4-601a8befda4b/manager/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.413475 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-dhhv6_33f312f2-394b-4ce5-965d-69a464079f55/kube-rbac-proxy/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.478338 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-dhhv6_33f312f2-394b-4ce5-965d-69a464079f55/manager/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.586230 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-d7g7m_16c65360-b9da-46ee-807a-7a508bb5b97b/kube-rbac-proxy/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.660433 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-d7g7m_16c65360-b9da-46ee-807a-7a508bb5b97b/manager/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.681466 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-xc8rb_31ac0023-c7da-49a3-8276-063e6c7b8a38/kube-rbac-proxy/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.807772 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-xc8rb_31ac0023-c7da-49a3-8276-063e6c7b8a38/manager/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.852776 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf_e7c82134-0489-4a84-91b2-de5e9ff651a3/kube-rbac-proxy/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.857271 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665ctv7jf_e7c82134-0489-4a84-91b2-de5e9ff651a3/manager/0.log" Oct 04 06:18:06 crc kubenswrapper[4802]: I1004 06:18:06.998332 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f965f7c8f-wrfgq_66a174f5-be2c-4fb0-92f0-4cb911033d87/kube-rbac-proxy/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 06:18:07.234773 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7b96bd67b7-wvw9c_b14a29f9-9112-4384-95e9-cd2ecb3e3c4b/kube-rbac-proxy/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 06:18:07.277001 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pf97s_1f691215-13a7-4ac7-9399-430acb279349/registry-server/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 
06:18:07.320920 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7b96bd67b7-wvw9c_b14a29f9-9112-4384-95e9-cd2ecb3e3c4b/operator/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 06:18:07.446587 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-579449c7d5-khgqf_26a2d08a-3357-48e4-8fda-e2fbe339e7a8/kube-rbac-proxy/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 06:18:07.581725 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-579449c7d5-khgqf_26a2d08a-3357-48e4-8fda-e2fbe339e7a8/manager/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 06:18:07.649672 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-dhxm9_da0eb8e5-72e3-4d6c-b896-e337363ed73c/kube-rbac-proxy/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 06:18:07.697592 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-dhxm9_da0eb8e5-72e3-4d6c-b896-e337363ed73c/manager/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 06:18:07.828554 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-xbjts_f72da0d9-79ad-4717-9b92-f45533584fb7/operator/0.log" Oct 04 06:18:07 crc kubenswrapper[4802]: I1004 06:18:07.923115 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-m758l_dd3b5879-0e5a-41f9-ab00-16fc063260cc/kube-rbac-proxy/0.log" Oct 04 06:18:08 crc kubenswrapper[4802]: I1004 06:18:08.097460 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-mfm5d_2a7bf534-0e93-4374-9eca-3015e9739b8b/kube-rbac-proxy/0.log" 
Oct 04 06:18:08 crc kubenswrapper[4802]: I1004 06:18:08.109893 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-m758l_dd3b5879-0e5a-41f9-ab00-16fc063260cc/manager/0.log" Oct 04 06:18:08 crc kubenswrapper[4802]: I1004 06:18:08.210748 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-mfm5d_2a7bf534-0e93-4374-9eca-3015e9739b8b/manager/0.log" Oct 04 06:18:08 crc kubenswrapper[4802]: I1004 06:18:08.253974 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f965f7c8f-wrfgq_66a174f5-be2c-4fb0-92f0-4cb911033d87/manager/0.log" Oct 04 06:18:08 crc kubenswrapper[4802]: I1004 06:18:08.299155 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-pfhnt_d90631a7-c7d2-4e82-a841-21980a76d784/kube-rbac-proxy/0.log" Oct 04 06:18:08 crc kubenswrapper[4802]: I1004 06:18:08.305391 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-pfhnt_d90631a7-c7d2-4e82-a841-21980a76d784/manager/0.log" Oct 04 06:18:08 crc kubenswrapper[4802]: I1004 06:18:08.431813 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-cjxd2_566c2a23-aea4-4ea6-9820-666a22d36d99/kube-rbac-proxy/0.log" Oct 04 06:18:08 crc kubenswrapper[4802]: I1004 06:18:08.464897 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-cjxd2_566c2a23-aea4-4ea6-9820-666a22d36d99/manager/0.log" Oct 04 06:18:13 crc kubenswrapper[4802]: I1004 06:18:13.360023 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:18:13 crc kubenswrapper[4802]: 
E1004 06:18:13.360946 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:18:25 crc kubenswrapper[4802]: I1004 06:18:25.038426 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8zrjj_7e02eab6-078e-41f3-b53b-1fd83ce2a730/control-plane-machine-set-operator/0.log" Oct 04 06:18:25 crc kubenswrapper[4802]: I1004 06:18:25.191201 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lm8jn_6f5d3f6c-6b78-44d8-826a-e49742556aaa/machine-api-operator/0.log" Oct 04 06:18:25 crc kubenswrapper[4802]: I1004 06:18:25.202663 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lm8jn_6f5d3f6c-6b78-44d8-826a-e49742556aaa/kube-rbac-proxy/0.log" Oct 04 06:18:25 crc kubenswrapper[4802]: I1004 06:18:25.359503 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:18:25 crc kubenswrapper[4802]: E1004 06:18:25.359890 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:18:32 crc kubenswrapper[4802]: I1004 06:18:32.922121 4802 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-2h9d7"] Oct 04 06:18:32 crc kubenswrapper[4802]: E1004 06:18:32.922943 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7c1755-58e2-4ab7-835e-ea09fb96086c" containerName="container-00" Oct 04 06:18:32 crc kubenswrapper[4802]: I1004 06:18:32.922954 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7c1755-58e2-4ab7-835e-ea09fb96086c" containerName="container-00" Oct 04 06:18:32 crc kubenswrapper[4802]: I1004 06:18:32.940723 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7c1755-58e2-4ab7-835e-ea09fb96086c" containerName="container-00" Oct 04 06:18:32 crc kubenswrapper[4802]: I1004 06:18:32.942208 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:32 crc kubenswrapper[4802]: I1004 06:18:32.951357 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2h9d7"] Oct 04 06:18:32 crc kubenswrapper[4802]: I1004 06:18:32.999178 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz8vk\" (UniqueName: \"kubernetes.io/projected/0857c271-d71e-4af3-babf-f6b7c9953fb4-kube-api-access-kz8vk\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:32 crc kubenswrapper[4802]: I1004 06:18:32.999285 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-utilities\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:32 crc kubenswrapper[4802]: I1004 06:18:32.999311 4802 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-catalog-content\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:33 crc kubenswrapper[4802]: I1004 06:18:33.101249 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz8vk\" (UniqueName: \"kubernetes.io/projected/0857c271-d71e-4af3-babf-f6b7c9953fb4-kube-api-access-kz8vk\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:33 crc kubenswrapper[4802]: I1004 06:18:33.101362 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-utilities\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:33 crc kubenswrapper[4802]: I1004 06:18:33.101385 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-catalog-content\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:33 crc kubenswrapper[4802]: I1004 06:18:33.102189 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-utilities\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:33 crc kubenswrapper[4802]: I1004 06:18:33.102220 4802 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-catalog-content\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:33 crc kubenswrapper[4802]: I1004 06:18:33.118913 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz8vk\" (UniqueName: \"kubernetes.io/projected/0857c271-d71e-4af3-babf-f6b7c9953fb4-kube-api-access-kz8vk\") pod \"certified-operators-2h9d7\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:33 crc kubenswrapper[4802]: I1004 06:18:33.282060 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:33 crc kubenswrapper[4802]: I1004 06:18:33.790459 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2h9d7"] Oct 04 06:18:34 crc kubenswrapper[4802]: I1004 06:18:34.572762 4802 generic.go:334] "Generic (PLEG): container finished" podID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerID="5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895" exitCode=0 Oct 04 06:18:34 crc kubenswrapper[4802]: I1004 06:18:34.573075 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h9d7" event={"ID":"0857c271-d71e-4af3-babf-f6b7c9953fb4","Type":"ContainerDied","Data":"5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895"} Oct 04 06:18:34 crc kubenswrapper[4802]: I1004 06:18:34.573112 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h9d7" event={"ID":"0857c271-d71e-4af3-babf-f6b7c9953fb4","Type":"ContainerStarted","Data":"a03f1b204163dff7c685b34518306e8e30f8bb9a5d4fb8bcdf1205fc117c2b39"} Oct 04 06:18:34 crc 
kubenswrapper[4802]: I1004 06:18:34.576021 4802 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 04 06:18:35 crc kubenswrapper[4802]: I1004 06:18:35.584628 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h9d7" event={"ID":"0857c271-d71e-4af3-babf-f6b7c9953fb4","Type":"ContainerStarted","Data":"f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd"} Oct 04 06:18:36 crc kubenswrapper[4802]: I1004 06:18:36.601582 4802 generic.go:334] "Generic (PLEG): container finished" podID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerID="f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd" exitCode=0 Oct 04 06:18:36 crc kubenswrapper[4802]: I1004 06:18:36.601670 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h9d7" event={"ID":"0857c271-d71e-4af3-babf-f6b7c9953fb4","Type":"ContainerDied","Data":"f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd"} Oct 04 06:18:37 crc kubenswrapper[4802]: I1004 06:18:37.610955 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h9d7" event={"ID":"0857c271-d71e-4af3-babf-f6b7c9953fb4","Type":"ContainerStarted","Data":"6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460"} Oct 04 06:18:37 crc kubenswrapper[4802]: I1004 06:18:37.637685 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2h9d7" podStartSLOduration=3.123601186 podStartE2EDuration="5.637660202s" podCreationTimestamp="2025-10-04 06:18:32 +0000 UTC" firstStartedPulling="2025-10-04 06:18:34.575663286 +0000 UTC m=+5556.983663941" lastFinishedPulling="2025-10-04 06:18:37.089722332 +0000 UTC m=+5559.497722957" observedRunningTime="2025-10-04 06:18:37.626387481 +0000 UTC m=+5560.034388106" watchObservedRunningTime="2025-10-04 06:18:37.637660202 +0000 UTC 
m=+5560.045660827" Oct 04 06:18:38 crc kubenswrapper[4802]: I1004 06:18:38.364982 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:18:38 crc kubenswrapper[4802]: E1004 06:18:38.365285 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:18:38 crc kubenswrapper[4802]: I1004 06:18:38.980059 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-btfxr_61f7ab8b-a895-4573-924e-2fbc1fd17e84/cert-manager-controller/0.log" Oct 04 06:18:39 crc kubenswrapper[4802]: I1004 06:18:39.171063 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-qvsrv_2cc33a76-77de-4149-a501-28de25d1b772/cert-manager-cainjector/0.log" Oct 04 06:18:39 crc kubenswrapper[4802]: I1004 06:18:39.189576 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-4zrhg_618bdd8f-d326-4da6-a8ee-b8aee0f1e09f/cert-manager-webhook/0.log" Oct 04 06:18:43 crc kubenswrapper[4802]: I1004 06:18:43.283218 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:43 crc kubenswrapper[4802]: I1004 06:18:43.283706 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:43 crc kubenswrapper[4802]: I1004 06:18:43.337478 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 
06:18:43 crc kubenswrapper[4802]: I1004 06:18:43.767052 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:43 crc kubenswrapper[4802]: I1004 06:18:43.833036 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2h9d7"] Oct 04 06:18:45 crc kubenswrapper[4802]: I1004 06:18:45.722535 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2h9d7" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerName="registry-server" containerID="cri-o://6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460" gracePeriod=2 Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.343075 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.531859 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-utilities\") pod \"0857c271-d71e-4af3-babf-f6b7c9953fb4\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.532003 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-catalog-content\") pod \"0857c271-d71e-4af3-babf-f6b7c9953fb4\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.532091 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz8vk\" (UniqueName: \"kubernetes.io/projected/0857c271-d71e-4af3-babf-f6b7c9953fb4-kube-api-access-kz8vk\") pod \"0857c271-d71e-4af3-babf-f6b7c9953fb4\" (UID: \"0857c271-d71e-4af3-babf-f6b7c9953fb4\") " Oct 
04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.532750 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-utilities" (OuterVolumeSpecName: "utilities") pod "0857c271-d71e-4af3-babf-f6b7c9953fb4" (UID: "0857c271-d71e-4af3-babf-f6b7c9953fb4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.538919 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0857c271-d71e-4af3-babf-f6b7c9953fb4-kube-api-access-kz8vk" (OuterVolumeSpecName: "kube-api-access-kz8vk") pod "0857c271-d71e-4af3-babf-f6b7c9953fb4" (UID: "0857c271-d71e-4af3-babf-f6b7c9953fb4"). InnerVolumeSpecName "kube-api-access-kz8vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.571963 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0857c271-d71e-4af3-babf-f6b7c9953fb4" (UID: "0857c271-d71e-4af3-babf-f6b7c9953fb4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.634571 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz8vk\" (UniqueName: \"kubernetes.io/projected/0857c271-d71e-4af3-babf-f6b7c9953fb4-kube-api-access-kz8vk\") on node \"crc\" DevicePath \"\"" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.634604 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.634616 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857c271-d71e-4af3-babf-f6b7c9953fb4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.752094 4802 generic.go:334] "Generic (PLEG): container finished" podID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerID="6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460" exitCode=0 Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.752137 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h9d7" event={"ID":"0857c271-d71e-4af3-babf-f6b7c9953fb4","Type":"ContainerDied","Data":"6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460"} Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.752164 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2h9d7" event={"ID":"0857c271-d71e-4af3-babf-f6b7c9953fb4","Type":"ContainerDied","Data":"a03f1b204163dff7c685b34518306e8e30f8bb9a5d4fb8bcdf1205fc117c2b39"} Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.752180 4802 scope.go:117] "RemoveContainer" containerID="6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 
06:18:46.752311 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2h9d7" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.789171 4802 scope.go:117] "RemoveContainer" containerID="f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.789288 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2h9d7"] Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.801226 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2h9d7"] Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.820082 4802 scope.go:117] "RemoveContainer" containerID="5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.861405 4802 scope.go:117] "RemoveContainer" containerID="6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460" Oct 04 06:18:46 crc kubenswrapper[4802]: E1004 06:18:46.861826 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460\": container with ID starting with 6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460 not found: ID does not exist" containerID="6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.861854 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460"} err="failed to get container status \"6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460\": rpc error: code = NotFound desc = could not find container \"6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460\": container with ID starting with 
6827b66a3283d2a13804f326623dde22977f8435d0576cff58db5298731f9460 not found: ID does not exist" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.861874 4802 scope.go:117] "RemoveContainer" containerID="f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd" Oct 04 06:18:46 crc kubenswrapper[4802]: E1004 06:18:46.862222 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd\": container with ID starting with f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd not found: ID does not exist" containerID="f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.862246 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd"} err="failed to get container status \"f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd\": rpc error: code = NotFound desc = could not find container \"f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd\": container with ID starting with f30c2d86316030d2d98e10e4b2b68dfbc8b6cb12795d556d4093d9f36636bfcd not found: ID does not exist" Oct 04 06:18:46 crc kubenswrapper[4802]: I1004 06:18:46.862260 4802 scope.go:117] "RemoveContainer" containerID="5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895" Oct 04 06:18:46 crc kubenswrapper[4802]: E1004 06:18:46.862457 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895\": container with ID starting with 5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895 not found: ID does not exist" containerID="5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895" Oct 04 06:18:46 crc 
kubenswrapper[4802]: I1004 06:18:46.862480 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895"} err="failed to get container status \"5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895\": rpc error: code = NotFound desc = could not find container \"5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895\": container with ID starting with 5069b69a9f8c31d1b1af60267a2f676b52ea07ec8064854ec233db5888294895 not found: ID does not exist" Oct 04 06:18:48 crc kubenswrapper[4802]: I1004 06:18:48.376743 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" path="/var/lib/kubelet/pods/0857c271-d71e-4af3-babf-f6b7c9953fb4/volumes" Oct 04 06:18:52 crc kubenswrapper[4802]: I1004 06:18:52.361135 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:18:52 crc kubenswrapper[4802]: E1004 06:18:52.361504 4802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dc98r_openshift-machine-config-operator(611d63c9-e554-40be-aab2-f2ca43f6827b)\"" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" Oct 04 06:18:52 crc kubenswrapper[4802]: I1004 06:18:52.967107 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-bcj7b_5a599523-a3a9-4820-9370-59a99fa3e327/nmstate-console-plugin/0.log" Oct 04 06:18:53 crc kubenswrapper[4802]: I1004 06:18:53.108219 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fhr6c_158e0059-2885-435b-bd19-1f6208a33f36/nmstate-handler/0.log" Oct 04 06:18:53 crc 
kubenswrapper[4802]: I1004 06:18:53.170000 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-6sht9_4123c7f6-6452-4dc2-a07d-d0603691c48e/kube-rbac-proxy/0.log" Oct 04 06:18:53 crc kubenswrapper[4802]: I1004 06:18:53.171589 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-6sht9_4123c7f6-6452-4dc2-a07d-d0603691c48e/nmstate-metrics/0.log" Oct 04 06:18:53 crc kubenswrapper[4802]: I1004 06:18:53.329818 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-ck9p8_bf01160a-8834-4760-b5c6-c6870ac75db3/nmstate-operator/0.log" Oct 04 06:18:53 crc kubenswrapper[4802]: I1004 06:18:53.358959 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-fcwvr_6a5c2438-a3fd-493a-bc06-dee5dfe74fac/nmstate-webhook/0.log" Oct 04 06:19:03 crc kubenswrapper[4802]: I1004 06:19:03.360519 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:19:03 crc kubenswrapper[4802]: I1004 06:19:03.934885 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"52f04261b35dc0039092e24fa1e250e8637dbb1b4effa3bf770592a436e388dc"} Oct 04 06:19:07 crc kubenswrapper[4802]: I1004 06:19:07.416246 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-9bbxh_ecbf7b6a-2a3e-44c3-8516-dcd1ec840842/kube-rbac-proxy/0.log" Oct 04 06:19:07 crc kubenswrapper[4802]: I1004 06:19:07.489661 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-9bbxh_ecbf7b6a-2a3e-44c3-8516-dcd1ec840842/controller/0.log" Oct 04 06:19:07 crc kubenswrapper[4802]: I1004 06:19:07.589731 4802 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-frr-files/0.log" Oct 04 06:19:07 crc kubenswrapper[4802]: I1004 06:19:07.724794 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-frr-files/0.log" Oct 04 06:19:07 crc kubenswrapper[4802]: I1004 06:19:07.759260 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-reloader/0.log" Oct 04 06:19:07 crc kubenswrapper[4802]: I1004 06:19:07.759287 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-metrics/0.log" Oct 04 06:19:07 crc kubenswrapper[4802]: I1004 06:19:07.819658 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-reloader/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.011292 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-frr-files/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.011931 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-reloader/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.016506 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-metrics/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.044070 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-metrics/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.181917 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-frr-files/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.220143 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-reloader/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.230585 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/controller/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.261326 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/cp-metrics/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.437488 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/frr-metrics/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.443512 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/kube-rbac-proxy/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.505134 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/kube-rbac-proxy-frr/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.657666 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/reloader/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.739989 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-pbl8c_df9efd5f-32a9-4858-930d-84d1fad7f160/frr-k8s-webhook-server/0.log" Oct 04 06:19:08 crc kubenswrapper[4802]: I1004 06:19:08.908609 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68b4b95c5-hj8lq_993c123a-61e1-4430-8be4-e17388014589/manager/0.log" Oct 04 06:19:09 crc kubenswrapper[4802]: I1004 06:19:09.103524 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6cf665f679-9vg2w_54dc0606-5967-4e11-892e-683ab5ba6092/webhook-server/0.log" Oct 04 06:19:09 crc kubenswrapper[4802]: I1004 06:19:09.176148 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-h7l7h_08354f64-a424-482b-86d1-49d082f168be/kube-rbac-proxy/0.log" Oct 04 06:19:09 crc kubenswrapper[4802]: I1004 06:19:09.769891 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-h7l7h_08354f64-a424-482b-86d1-49d082f168be/speaker/0.log" Oct 04 06:19:09 crc kubenswrapper[4802]: I1004 06:19:09.839133 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrt4r_485db5a3-22ab-44c2-8f05-7cbd0e5054be/frr/0.log" Oct 04 06:19:23 crc kubenswrapper[4802]: I1004 06:19:23.557136 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m_ad46a69d-5845-4fc3-861b-3d8ebd4106c6/util/0.log" Oct 04 06:19:23 crc kubenswrapper[4802]: I1004 06:19:23.754065 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m_ad46a69d-5845-4fc3-861b-3d8ebd4106c6/pull/0.log" Oct 04 06:19:23 crc kubenswrapper[4802]: I1004 06:19:23.779308 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m_ad46a69d-5845-4fc3-861b-3d8ebd4106c6/util/0.log" Oct 04 06:19:23 crc kubenswrapper[4802]: I1004 06:19:23.797469 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m_ad46a69d-5845-4fc3-861b-3d8ebd4106c6/pull/0.log" Oct 04 06:19:23 crc kubenswrapper[4802]: I1004 06:19:23.943099 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m_ad46a69d-5845-4fc3-861b-3d8ebd4106c6/extract/0.log" Oct 04 06:19:23 crc kubenswrapper[4802]: I1004 06:19:23.968564 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m_ad46a69d-5845-4fc3-861b-3d8ebd4106c6/pull/0.log" Oct 04 06:19:23 crc kubenswrapper[4802]: I1004 06:19:23.985218 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2lhd4m_ad46a69d-5845-4fc3-861b-3d8ebd4106c6/util/0.log" Oct 04 06:19:24 crc kubenswrapper[4802]: I1004 06:19:24.128252 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wb9js_0f090852-6771-4013-9a95-c1c0d1bd656d/extract-utilities/0.log" Oct 04 06:19:24 crc kubenswrapper[4802]: I1004 06:19:24.350415 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wb9js_0f090852-6771-4013-9a95-c1c0d1bd656d/extract-utilities/0.log" Oct 04 06:19:24 crc kubenswrapper[4802]: I1004 06:19:24.404550 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wb9js_0f090852-6771-4013-9a95-c1c0d1bd656d/extract-content/0.log" Oct 04 06:19:24 crc kubenswrapper[4802]: I1004 06:19:24.441562 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wb9js_0f090852-6771-4013-9a95-c1c0d1bd656d/extract-content/0.log" Oct 04 06:19:24 crc kubenswrapper[4802]: I1004 06:19:24.606616 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-wb9js_0f090852-6771-4013-9a95-c1c0d1bd656d/extract-content/0.log" Oct 04 06:19:24 crc kubenswrapper[4802]: I1004 06:19:24.611895 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wb9js_0f090852-6771-4013-9a95-c1c0d1bd656d/extract-utilities/0.log" Oct 04 06:19:24 crc kubenswrapper[4802]: I1004 06:19:24.847120 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkgfv_c79b1edc-f043-486b-846f-989f2791b3e9/extract-utilities/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.080080 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkgfv_c79b1edc-f043-486b-846f-989f2791b3e9/extract-content/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.085223 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkgfv_c79b1edc-f043-486b-846f-989f2791b3e9/extract-content/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.109103 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkgfv_c79b1edc-f043-486b-846f-989f2791b3e9/extract-utilities/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.419534 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wb9js_0f090852-6771-4013-9a95-c1c0d1bd656d/registry-server/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.436745 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkgfv_c79b1edc-f043-486b-846f-989f2791b3e9/extract-content/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.517137 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fkgfv_c79b1edc-f043-486b-846f-989f2791b3e9/extract-utilities/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.702714 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr_a7e85ec1-39eb-4446-955f-b3714e5308af/util/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.785571 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkgfv_c79b1edc-f043-486b-846f-989f2791b3e9/registry-server/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.960588 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr_a7e85ec1-39eb-4446-955f-b3714e5308af/util/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.970703 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr_a7e85ec1-39eb-4446-955f-b3714e5308af/pull/0.log" Oct 04 06:19:25 crc kubenswrapper[4802]: I1004 06:19:25.992249 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr_a7e85ec1-39eb-4446-955f-b3714e5308af/pull/0.log" Oct 04 06:19:26 crc kubenswrapper[4802]: I1004 06:19:26.172518 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr_a7e85ec1-39eb-4446-955f-b3714e5308af/util/0.log" Oct 04 06:19:26 crc kubenswrapper[4802]: I1004 06:19:26.185660 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr_a7e85ec1-39eb-4446-955f-b3714e5308af/pull/0.log" Oct 04 06:19:26 crc kubenswrapper[4802]: I1004 06:19:26.228279 
4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cd4jxr_a7e85ec1-39eb-4446-955f-b3714e5308af/extract/0.log" Oct 04 06:19:26 crc kubenswrapper[4802]: I1004 06:19:26.400050 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzmnj_a02053d7-6f09-4c04-a01d-af9a90812a86/extract-utilities/0.log" Oct 04 06:19:26 crc kubenswrapper[4802]: I1004 06:19:26.403586 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-x2tsz_709abe0b-3c8d-4646-bdce-0a38e7e406f8/marketplace-operator/0.log" Oct 04 06:19:26 crc kubenswrapper[4802]: I1004 06:19:26.633852 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzmnj_a02053d7-6f09-4c04-a01d-af9a90812a86/extract-content/0.log" Oct 04 06:19:26 crc kubenswrapper[4802]: I1004 06:19:26.766280 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzmnj_a02053d7-6f09-4c04-a01d-af9a90812a86/extract-content/0.log" Oct 04 06:19:26 crc kubenswrapper[4802]: I1004 06:19:26.880678 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzmnj_a02053d7-6f09-4c04-a01d-af9a90812a86/extract-utilities/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.102392 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzmnj_a02053d7-6f09-4c04-a01d-af9a90812a86/extract-content/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.133373 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzmnj_a02053d7-6f09-4c04-a01d-af9a90812a86/extract-utilities/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.140236 4802 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-tmd6j_b222c027-fa4e-4fd2-bb99-bf44d6c44d5d/extract-utilities/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.292261 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzmnj_a02053d7-6f09-4c04-a01d-af9a90812a86/registry-server/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.390042 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tmd6j_b222c027-fa4e-4fd2-bb99-bf44d6c44d5d/extract-utilities/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.393801 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tmd6j_b222c027-fa4e-4fd2-bb99-bf44d6c44d5d/extract-content/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.407833 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tmd6j_b222c027-fa4e-4fd2-bb99-bf44d6c44d5d/extract-content/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.518582 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tmd6j_b222c027-fa4e-4fd2-bb99-bf44d6c44d5d/extract-utilities/0.log" Oct 04 06:19:27 crc kubenswrapper[4802]: I1004 06:19:27.539067 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tmd6j_b222c027-fa4e-4fd2-bb99-bf44d6c44d5d/extract-content/0.log" Oct 04 06:19:28 crc kubenswrapper[4802]: I1004 06:19:28.299007 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tmd6j_b222c027-fa4e-4fd2-bb99-bf44d6c44d5d/registry-server/0.log" Oct 04 06:21:22 crc kubenswrapper[4802]: I1004 06:21:22.662219 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:21:22 crc kubenswrapper[4802]: I1004 06:21:22.663086 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:21:42 crc kubenswrapper[4802]: I1004 06:21:42.682660 4802 generic.go:334] "Generic (PLEG): container finished" podID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerID="6e4e042e56e306a9451742a4212c8433a988617ffaeb7894a06a575e812692a5" exitCode=0 Oct 04 06:21:42 crc kubenswrapper[4802]: I1004 06:21:42.682738 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" event={"ID":"f05464ee-e487-453b-bc52-76e8eae65f4e","Type":"ContainerDied","Data":"6e4e042e56e306a9451742a4212c8433a988617ffaeb7894a06a575e812692a5"} Oct 04 06:21:42 crc kubenswrapper[4802]: I1004 06:21:42.685000 4802 scope.go:117] "RemoveContainer" containerID="6e4e042e56e306a9451742a4212c8433a988617ffaeb7894a06a575e812692a5" Oct 04 06:21:42 crc kubenswrapper[4802]: I1004 06:21:42.797296 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gxgzv_must-gather-mzj8b_f05464ee-e487-453b-bc52-76e8eae65f4e/gather/0.log" Oct 04 06:21:51 crc kubenswrapper[4802]: I1004 06:21:51.473382 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gxgzv/must-gather-mzj8b"] Oct 04 06:21:51 crc kubenswrapper[4802]: I1004 06:21:51.474207 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" podUID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerName="copy" 
containerID="cri-o://52c4b2a00cc4b43af43698ea23c699f2eb721ab67dac6b2d07d79af508c75d8d" gracePeriod=2 Oct 04 06:21:51 crc kubenswrapper[4802]: I1004 06:21:51.494263 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gxgzv/must-gather-mzj8b"] Oct 04 06:21:51 crc kubenswrapper[4802]: I1004 06:21:51.783632 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gxgzv_must-gather-mzj8b_f05464ee-e487-453b-bc52-76e8eae65f4e/copy/0.log" Oct 04 06:21:51 crc kubenswrapper[4802]: I1004 06:21:51.784611 4802 generic.go:334] "Generic (PLEG): container finished" podID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerID="52c4b2a00cc4b43af43698ea23c699f2eb721ab67dac6b2d07d79af508c75d8d" exitCode=143 Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.072471 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gxgzv_must-gather-mzj8b_f05464ee-e487-453b-bc52-76e8eae65f4e/copy/0.log" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.073014 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.170307 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2mqs\" (UniqueName: \"kubernetes.io/projected/f05464ee-e487-453b-bc52-76e8eae65f4e-kube-api-access-q2mqs\") pod \"f05464ee-e487-453b-bc52-76e8eae65f4e\" (UID: \"f05464ee-e487-453b-bc52-76e8eae65f4e\") " Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.170565 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f05464ee-e487-453b-bc52-76e8eae65f4e-must-gather-output\") pod \"f05464ee-e487-453b-bc52-76e8eae65f4e\" (UID: \"f05464ee-e487-453b-bc52-76e8eae65f4e\") " Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.175748 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f05464ee-e487-453b-bc52-76e8eae65f4e-kube-api-access-q2mqs" (OuterVolumeSpecName: "kube-api-access-q2mqs") pod "f05464ee-e487-453b-bc52-76e8eae65f4e" (UID: "f05464ee-e487-453b-bc52-76e8eae65f4e"). InnerVolumeSpecName "kube-api-access-q2mqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.273790 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2mqs\" (UniqueName: \"kubernetes.io/projected/f05464ee-e487-453b-bc52-76e8eae65f4e-kube-api-access-q2mqs\") on node \"crc\" DevicePath \"\"" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.358724 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05464ee-e487-453b-bc52-76e8eae65f4e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f05464ee-e487-453b-bc52-76e8eae65f4e" (UID: "f05464ee-e487-453b-bc52-76e8eae65f4e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.371282 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f05464ee-e487-453b-bc52-76e8eae65f4e" path="/var/lib/kubelet/pods/f05464ee-e487-453b-bc52-76e8eae65f4e/volumes" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.374900 4802 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f05464ee-e487-453b-bc52-76e8eae65f4e-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.662847 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.663332 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.793233 4802 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gxgzv_must-gather-mzj8b_f05464ee-e487-453b-bc52-76e8eae65f4e/copy/0.log" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.794002 4802 scope.go:117] "RemoveContainer" containerID="52c4b2a00cc4b43af43698ea23c699f2eb721ab67dac6b2d07d79af508c75d8d" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.794152 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gxgzv/must-gather-mzj8b" Oct 04 06:21:52 crc kubenswrapper[4802]: I1004 06:21:52.837754 4802 scope.go:117] "RemoveContainer" containerID="6e4e042e56e306a9451742a4212c8433a988617ffaeb7894a06a575e812692a5" Oct 04 06:22:16 crc kubenswrapper[4802]: I1004 06:22:16.424628 4802 scope.go:117] "RemoveContainer" containerID="44f137cb949cc99e75f4d43ab331c0509a54157ab0b4af33f73dc25fd2b57a1e" Oct 04 06:22:22 crc kubenswrapper[4802]: I1004 06:22:22.663380 4802 patch_prober.go:28] interesting pod/machine-config-daemon-dc98r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 04 06:22:22 crc kubenswrapper[4802]: I1004 06:22:22.666439 4802 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 04 06:22:22 crc kubenswrapper[4802]: I1004 06:22:22.666813 4802 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" Oct 04 06:22:22 crc kubenswrapper[4802]: I1004 06:22:22.668394 4802 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"52f04261b35dc0039092e24fa1e250e8637dbb1b4effa3bf770592a436e388dc"} pod="openshift-machine-config-operator/machine-config-daemon-dc98r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 04 06:22:22 crc kubenswrapper[4802]: I1004 06:22:22.668749 4802 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-dc98r" podUID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerName="machine-config-daemon" containerID="cri-o://52f04261b35dc0039092e24fa1e250e8637dbb1b4effa3bf770592a436e388dc" gracePeriod=600 Oct 04 06:22:23 crc kubenswrapper[4802]: I1004 06:22:23.142081 4802 generic.go:334] "Generic (PLEG): container finished" podID="611d63c9-e554-40be-aab2-f2ca43f6827b" containerID="52f04261b35dc0039092e24fa1e250e8637dbb1b4effa3bf770592a436e388dc" exitCode=0 Oct 04 06:22:23 crc kubenswrapper[4802]: I1004 06:22:23.142146 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerDied","Data":"52f04261b35dc0039092e24fa1e250e8637dbb1b4effa3bf770592a436e388dc"} Oct 04 06:22:23 crc kubenswrapper[4802]: I1004 06:22:23.142545 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dc98r" event={"ID":"611d63c9-e554-40be-aab2-f2ca43f6827b","Type":"ContainerStarted","Data":"1104a2d73e491bbf09ef1e30d7e9311caaead6061c7db06a0a8bf0d507b0b169"} Oct 04 06:22:23 crc kubenswrapper[4802]: I1004 06:22:23.142579 4802 scope.go:117] "RemoveContainer" containerID="1a3628d850f198e32b804a3aa103b98ef4f3e7b7abb6c32022a2649becc562f7" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.116835 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhn85"] Oct 04 06:23:00 crc kubenswrapper[4802]: E1004 06:23:00.119275 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerName="extract-content" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.119373 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerName="extract-content" Oct 04 06:23:00 crc kubenswrapper[4802]: E1004 06:23:00.119447 4802 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerName="gather" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.119504 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerName="gather" Oct 04 06:23:00 crc kubenswrapper[4802]: E1004 06:23:00.119579 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerName="copy" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.119638 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerName="copy" Oct 04 06:23:00 crc kubenswrapper[4802]: E1004 06:23:00.119810 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerName="extract-utilities" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.119884 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerName="extract-utilities" Oct 04 06:23:00 crc kubenswrapper[4802]: E1004 06:23:00.119953 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerName="registry-server" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.120012 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerName="registry-server" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.120256 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0857c271-d71e-4af3-babf-f6b7c9953fb4" containerName="registry-server" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.120362 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerName="copy" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.120462 4802 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f05464ee-e487-453b-bc52-76e8eae65f4e" containerName="gather" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.122202 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.144434 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhn85"] Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.215756 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-utilities\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.215842 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnz2\" (UniqueName: \"kubernetes.io/projected/b0f3f703-279e-48c3-bb0d-d3685871f4d2-kube-api-access-pjnz2\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.215903 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-catalog-content\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.317752 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-utilities\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " 
pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.317835 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjnz2\" (UniqueName: \"kubernetes.io/projected/b0f3f703-279e-48c3-bb0d-d3685871f4d2-kube-api-access-pjnz2\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.317889 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-catalog-content\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.318388 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-catalog-content\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.318439 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-utilities\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.344724 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjnz2\" (UniqueName: \"kubernetes.io/projected/b0f3f703-279e-48c3-bb0d-d3685871f4d2-kube-api-access-pjnz2\") pod \"redhat-operators-lhn85\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " pod="openshift-marketplace/redhat-operators-lhn85" Oct 
04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.475028 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:00 crc kubenswrapper[4802]: I1004 06:23:00.958321 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhn85"] Oct 04 06:23:01 crc kubenswrapper[4802]: I1004 06:23:01.669424 4802 generic.go:334] "Generic (PLEG): container finished" podID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerID="ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7" exitCode=0 Oct 04 06:23:01 crc kubenswrapper[4802]: I1004 06:23:01.669834 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhn85" event={"ID":"b0f3f703-279e-48c3-bb0d-d3685871f4d2","Type":"ContainerDied","Data":"ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7"} Oct 04 06:23:01 crc kubenswrapper[4802]: I1004 06:23:01.669873 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhn85" event={"ID":"b0f3f703-279e-48c3-bb0d-d3685871f4d2","Type":"ContainerStarted","Data":"e2475cb37a4b7fd6b0569786e23ad2016b3f1cfe3c68ff4a412a80cc4d6ef8bb"} Oct 04 06:23:03 crc kubenswrapper[4802]: I1004 06:23:03.698712 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhn85" event={"ID":"b0f3f703-279e-48c3-bb0d-d3685871f4d2","Type":"ContainerStarted","Data":"74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1"} Oct 04 06:23:05 crc kubenswrapper[4802]: I1004 06:23:05.727066 4802 generic.go:334] "Generic (PLEG): container finished" podID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerID="74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1" exitCode=0 Oct 04 06:23:05 crc kubenswrapper[4802]: I1004 06:23:05.727144 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhn85" 
event={"ID":"b0f3f703-279e-48c3-bb0d-d3685871f4d2","Type":"ContainerDied","Data":"74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1"} Oct 04 06:23:07 crc kubenswrapper[4802]: I1004 06:23:07.754877 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhn85" event={"ID":"b0f3f703-279e-48c3-bb0d-d3685871f4d2","Type":"ContainerStarted","Data":"5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7"} Oct 04 06:23:07 crc kubenswrapper[4802]: I1004 06:23:07.793702 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhn85" podStartSLOduration=2.886599644 podStartE2EDuration="7.793676734s" podCreationTimestamp="2025-10-04 06:23:00 +0000 UTC" firstStartedPulling="2025-10-04 06:23:01.672177668 +0000 UTC m=+5824.080178323" lastFinishedPulling="2025-10-04 06:23:06.579254758 +0000 UTC m=+5828.987255413" observedRunningTime="2025-10-04 06:23:07.785343096 +0000 UTC m=+5830.193343761" watchObservedRunningTime="2025-10-04 06:23:07.793676734 +0000 UTC m=+5830.201677379" Oct 04 06:23:10 crc kubenswrapper[4802]: I1004 06:23:10.475692 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:10 crc kubenswrapper[4802]: I1004 06:23:10.476021 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:11 crc kubenswrapper[4802]: I1004 06:23:11.544744 4802 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhn85" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="registry-server" probeResult="failure" output=< Oct 04 06:23:11 crc kubenswrapper[4802]: timeout: failed to connect service ":50051" within 1s Oct 04 06:23:11 crc kubenswrapper[4802]: > Oct 04 06:23:20 crc kubenswrapper[4802]: I1004 06:23:20.544311 4802 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:20 crc kubenswrapper[4802]: I1004 06:23:20.638364 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:20 crc kubenswrapper[4802]: I1004 06:23:20.790858 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhn85"] Oct 04 06:23:21 crc kubenswrapper[4802]: I1004 06:23:21.938765 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhn85" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="registry-server" containerID="cri-o://5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7" gracePeriod=2 Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.447498 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.480837 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-utilities\") pod \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.480940 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjnz2\" (UniqueName: \"kubernetes.io/projected/b0f3f703-279e-48c3-bb0d-d3685871f4d2-kube-api-access-pjnz2\") pod \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.480995 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-catalog-content\") pod 
\"b0f3f703-279e-48c3-bb0d-d3685871f4d2\" (UID: \"b0f3f703-279e-48c3-bb0d-d3685871f4d2\") " Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.482513 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-utilities" (OuterVolumeSpecName: "utilities") pod "b0f3f703-279e-48c3-bb0d-d3685871f4d2" (UID: "b0f3f703-279e-48c3-bb0d-d3685871f4d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.491783 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f3f703-279e-48c3-bb0d-d3685871f4d2-kube-api-access-pjnz2" (OuterVolumeSpecName: "kube-api-access-pjnz2") pod "b0f3f703-279e-48c3-bb0d-d3685871f4d2" (UID: "b0f3f703-279e-48c3-bb0d-d3685871f4d2"). InnerVolumeSpecName "kube-api-access-pjnz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.583424 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-utilities\") on node \"crc\" DevicePath \"\"" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.583468 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjnz2\" (UniqueName: \"kubernetes.io/projected/b0f3f703-279e-48c3-bb0d-d3685871f4d2-kube-api-access-pjnz2\") on node \"crc\" DevicePath \"\"" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.618012 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0f3f703-279e-48c3-bb0d-d3685871f4d2" (UID: "b0f3f703-279e-48c3-bb0d-d3685871f4d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.685164 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0f3f703-279e-48c3-bb0d-d3685871f4d2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.953206 4802 generic.go:334] "Generic (PLEG): container finished" podID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerID="5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7" exitCode=0 Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.953268 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhn85" event={"ID":"b0f3f703-279e-48c3-bb0d-d3685871f4d2","Type":"ContainerDied","Data":"5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7"} Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.953340 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhn85" event={"ID":"b0f3f703-279e-48c3-bb0d-d3685871f4d2","Type":"ContainerDied","Data":"e2475cb37a4b7fd6b0569786e23ad2016b3f1cfe3c68ff4a412a80cc4d6ef8bb"} Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.953372 4802 scope.go:117] "RemoveContainer" containerID="5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.953365 4802 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lhn85" Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.995457 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhn85"] Oct 04 06:23:22 crc kubenswrapper[4802]: I1004 06:23:22.995616 4802 scope.go:117] "RemoveContainer" containerID="74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1" Oct 04 06:23:23 crc kubenswrapper[4802]: I1004 06:23:23.004587 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhn85"] Oct 04 06:23:23 crc kubenswrapper[4802]: I1004 06:23:23.033799 4802 scope.go:117] "RemoveContainer" containerID="ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7" Oct 04 06:23:23 crc kubenswrapper[4802]: I1004 06:23:23.061206 4802 scope.go:117] "RemoveContainer" containerID="5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7" Oct 04 06:23:23 crc kubenswrapper[4802]: E1004 06:23:23.063109 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7\": container with ID starting with 5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7 not found: ID does not exist" containerID="5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7" Oct 04 06:23:23 crc kubenswrapper[4802]: I1004 06:23:23.063164 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7"} err="failed to get container status \"5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7\": rpc error: code = NotFound desc = could not find container \"5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7\": container with ID starting with 5e68a7b10a4acb5729193c022c98162d943d615ba389d54e81c64055898d39d7 not found: ID does 
not exist" Oct 04 06:23:23 crc kubenswrapper[4802]: I1004 06:23:23.063197 4802 scope.go:117] "RemoveContainer" containerID="74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1" Oct 04 06:23:23 crc kubenswrapper[4802]: E1004 06:23:23.067014 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1\": container with ID starting with 74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1 not found: ID does not exist" containerID="74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1" Oct 04 06:23:23 crc kubenswrapper[4802]: I1004 06:23:23.067061 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1"} err="failed to get container status \"74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1\": rpc error: code = NotFound desc = could not find container \"74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1\": container with ID starting with 74ac1fd01fc1174ddf942c8f86a6859863a7011c630cf82d1d34b4d45c6d2cc1 not found: ID does not exist" Oct 04 06:23:23 crc kubenswrapper[4802]: I1004 06:23:23.067097 4802 scope.go:117] "RemoveContainer" containerID="ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7" Oct 04 06:23:23 crc kubenswrapper[4802]: E1004 06:23:23.068022 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7\": container with ID starting with ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7 not found: ID does not exist" containerID="ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7" Oct 04 06:23:23 crc kubenswrapper[4802]: I1004 06:23:23.068097 4802 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7"} err="failed to get container status \"ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7\": rpc error: code = NotFound desc = could not find container \"ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7\": container with ID starting with ee91ad4bae77bf746f1f2ec5feea27ae833c941db2542d4d68e98a7a7b80e2b7 not found: ID does not exist"
Oct 04 06:23:24 crc kubenswrapper[4802]: I1004 06:23:24.380539 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" path="/var/lib/kubelet/pods/b0f3f703-279e-48c3-bb0d-d3685871f4d2/volumes"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.616364 4802 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5mmw5"]
Oct 04 06:23:30 crc kubenswrapper[4802]: E1004 06:23:30.617899 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="extract-utilities"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.617928 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="extract-utilities"
Oct 04 06:23:30 crc kubenswrapper[4802]: E1004 06:23:30.617964 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="registry-server"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.617977 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="registry-server"
Oct 04 06:23:30 crc kubenswrapper[4802]: E1004 06:23:30.618060 4802 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="extract-content"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.618078 4802 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="extract-content"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.618489 4802 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f3f703-279e-48c3-bb0d-d3685871f4d2" containerName="registry-server"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.621954 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.635277 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mmw5"]
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.686370 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbsf\" (UniqueName: \"kubernetes.io/projected/c347f466-e23c-4f3a-a806-3312d9d70710-kube-api-access-hvbsf\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.686494 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-catalog-content\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.686566 4802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-utilities\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.788375 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvbsf\" (UniqueName: \"kubernetes.io/projected/c347f466-e23c-4f3a-a806-3312d9d70710-kube-api-access-hvbsf\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.788453 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-catalog-content\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.788491 4802 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-utilities\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.789104 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-utilities\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.790149 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-catalog-content\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:30 crc kubenswrapper[4802]: I1004 06:23:30.811204 4802 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvbsf\" (UniqueName: \"kubernetes.io/projected/c347f466-e23c-4f3a-a806-3312d9d70710-kube-api-access-hvbsf\") pod \"community-operators-5mmw5\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") " pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:31 crc kubenswrapper[4802]: I1004 06:23:31.021930 4802 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:31 crc kubenswrapper[4802]: I1004 06:23:31.490156 4802 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mmw5"]
Oct 04 06:23:31 crc kubenswrapper[4802]: W1004 06:23:31.495495 4802 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc347f466_e23c_4f3a_a806_3312d9d70710.slice/crio-a599c83d03e3be0104a88ccfb2cf831a30cafd5e9f69cb017956987ad0620d31 WatchSource:0}: Error finding container a599c83d03e3be0104a88ccfb2cf831a30cafd5e9f69cb017956987ad0620d31: Status 404 returned error can't find the container with id a599c83d03e3be0104a88ccfb2cf831a30cafd5e9f69cb017956987ad0620d31
Oct 04 06:23:32 crc kubenswrapper[4802]: I1004 06:23:32.061430 4802 generic.go:334] "Generic (PLEG): container finished" podID="c347f466-e23c-4f3a-a806-3312d9d70710" containerID="eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34" exitCode=0
Oct 04 06:23:32 crc kubenswrapper[4802]: I1004 06:23:32.062155 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mmw5" event={"ID":"c347f466-e23c-4f3a-a806-3312d9d70710","Type":"ContainerDied","Data":"eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34"}
Oct 04 06:23:32 crc kubenswrapper[4802]: I1004 06:23:32.062229 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mmw5" event={"ID":"c347f466-e23c-4f3a-a806-3312d9d70710","Type":"ContainerStarted","Data":"a599c83d03e3be0104a88ccfb2cf831a30cafd5e9f69cb017956987ad0620d31"}
Oct 04 06:23:33 crc kubenswrapper[4802]: I1004 06:23:33.086141 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mmw5" event={"ID":"c347f466-e23c-4f3a-a806-3312d9d70710","Type":"ContainerStarted","Data":"cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b"}
Oct 04 06:23:34 crc kubenswrapper[4802]: I1004 06:23:34.102694 4802 generic.go:334] "Generic (PLEG): container finished" podID="c347f466-e23c-4f3a-a806-3312d9d70710" containerID="cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b" exitCode=0
Oct 04 06:23:34 crc kubenswrapper[4802]: I1004 06:23:34.102753 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mmw5" event={"ID":"c347f466-e23c-4f3a-a806-3312d9d70710","Type":"ContainerDied","Data":"cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b"}
Oct 04 06:23:35 crc kubenswrapper[4802]: I1004 06:23:35.116165 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mmw5" event={"ID":"c347f466-e23c-4f3a-a806-3312d9d70710","Type":"ContainerStarted","Data":"563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578"}
Oct 04 06:23:35 crc kubenswrapper[4802]: I1004 06:23:35.155932 4802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5mmw5" podStartSLOduration=2.699355053 podStartE2EDuration="5.155900884s" podCreationTimestamp="2025-10-04 06:23:30 +0000 UTC" firstStartedPulling="2025-10-04 06:23:32.067751794 +0000 UTC m=+5854.475752429" lastFinishedPulling="2025-10-04 06:23:34.524297605 +0000 UTC m=+5856.932298260" observedRunningTime="2025-10-04 06:23:35.142411349 +0000 UTC m=+5857.550412004" watchObservedRunningTime="2025-10-04 06:23:35.155900884 +0000 UTC m=+5857.563901549"
Oct 04 06:23:41 crc kubenswrapper[4802]: I1004 06:23:41.023293 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:41 crc kubenswrapper[4802]: I1004 06:23:41.023740 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:41 crc kubenswrapper[4802]: I1004 06:23:41.089528 4802 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:41 crc kubenswrapper[4802]: I1004 06:23:41.244267 4802 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:41 crc kubenswrapper[4802]: I1004 06:23:41.338570 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5mmw5"]
Oct 04 06:23:43 crc kubenswrapper[4802]: I1004 06:23:43.208081 4802 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5mmw5" podUID="c347f466-e23c-4f3a-a806-3312d9d70710" containerName="registry-server" containerID="cri-o://563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578" gracePeriod=2
Oct 04 06:23:43 crc kubenswrapper[4802]: I1004 06:23:43.886176 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:43 crc kubenswrapper[4802]: I1004 06:23:43.986512 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-catalog-content\") pod \"c347f466-e23c-4f3a-a806-3312d9d70710\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") "
Oct 04 06:23:43 crc kubenswrapper[4802]: I1004 06:23:43.986651 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-utilities\") pod \"c347f466-e23c-4f3a-a806-3312d9d70710\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") "
Oct 04 06:23:43 crc kubenswrapper[4802]: I1004 06:23:43.986719 4802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvbsf\" (UniqueName: \"kubernetes.io/projected/c347f466-e23c-4f3a-a806-3312d9d70710-kube-api-access-hvbsf\") pod \"c347f466-e23c-4f3a-a806-3312d9d70710\" (UID: \"c347f466-e23c-4f3a-a806-3312d9d70710\") "
Oct 04 06:23:43 crc kubenswrapper[4802]: I1004 06:23:43.988136 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-utilities" (OuterVolumeSpecName: "utilities") pod "c347f466-e23c-4f3a-a806-3312d9d70710" (UID: "c347f466-e23c-4f3a-a806-3312d9d70710"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 06:23:43 crc kubenswrapper[4802]: I1004 06:23:43.999162 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c347f466-e23c-4f3a-a806-3312d9d70710-kube-api-access-hvbsf" (OuterVolumeSpecName: "kube-api-access-hvbsf") pod "c347f466-e23c-4f3a-a806-3312d9d70710" (UID: "c347f466-e23c-4f3a-a806-3312d9d70710"). InnerVolumeSpecName "kube-api-access-hvbsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.088989 4802 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-utilities\") on node \"crc\" DevicePath \"\""
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.089025 4802 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvbsf\" (UniqueName: \"kubernetes.io/projected/c347f466-e23c-4f3a-a806-3312d9d70710-kube-api-access-hvbsf\") on node \"crc\" DevicePath \"\""
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.141825 4802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c347f466-e23c-4f3a-a806-3312d9d70710" (UID: "c347f466-e23c-4f3a-a806-3312d9d70710"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.190971 4802 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c347f466-e23c-4f3a-a806-3312d9d70710-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.224212 4802 generic.go:334] "Generic (PLEG): container finished" podID="c347f466-e23c-4f3a-a806-3312d9d70710" containerID="563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578" exitCode=0
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.224258 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mmw5" event={"ID":"c347f466-e23c-4f3a-a806-3312d9d70710","Type":"ContainerDied","Data":"563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578"}
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.224286 4802 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mmw5" event={"ID":"c347f466-e23c-4f3a-a806-3312d9d70710","Type":"ContainerDied","Data":"a599c83d03e3be0104a88ccfb2cf831a30cafd5e9f69cb017956987ad0620d31"}
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.224302 4802 scope.go:117] "RemoveContainer" containerID="563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.224446 4802 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5mmw5"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.276554 4802 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5mmw5"]
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.283726 4802 scope.go:117] "RemoveContainer" containerID="cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.285336 4802 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5mmw5"]
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.323692 4802 scope.go:117] "RemoveContainer" containerID="eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.349160 4802 scope.go:117] "RemoveContainer" containerID="563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578"
Oct 04 06:23:44 crc kubenswrapper[4802]: E1004 06:23:44.349629 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578\": container with ID starting with 563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578 not found: ID does not exist" containerID="563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.349701 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578"} err="failed to get container status \"563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578\": rpc error: code = NotFound desc = could not find container \"563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578\": container with ID starting with 563bfe50dfeb535d3ddceace27a6369a96485f9161e3764dfb3edb9e6b790578 not found: ID does not exist"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.349732 4802 scope.go:117] "RemoveContainer" containerID="cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b"
Oct 04 06:23:44 crc kubenswrapper[4802]: E1004 06:23:44.350310 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b\": container with ID starting with cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b not found: ID does not exist" containerID="cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.350401 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b"} err="failed to get container status \"cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b\": rpc error: code = NotFound desc = could not find container \"cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b\": container with ID starting with cc6d8ef06e4baed25b512c20dfaf9076e78eb1b3652241e2e1ac28d86e0a6a2b not found: ID does not exist"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.350427 4802 scope.go:117] "RemoveContainer" containerID="eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34"
Oct 04 06:23:44 crc kubenswrapper[4802]: E1004 06:23:44.351163 4802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34\": container with ID starting with eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34 not found: ID does not exist" containerID="eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.351230 4802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34"} err="failed to get container status \"eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34\": rpc error: code = NotFound desc = could not find container \"eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34\": container with ID starting with eb079703e7a183c4a5b4fb145b7c2bcaf9e15e0c6320c08132e723172ddb9f34 not found: ID does not exist"
Oct 04 06:23:44 crc kubenswrapper[4802]: I1004 06:23:44.376217 4802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c347f466-e23c-4f3a-a806-3312d9d70710" path="/var/lib/kubelet/pods/c347f466-e23c-4f3a-a806-3312d9d70710/volumes"